Artificial intelligence has achieved mass adoption faster than the personal computer or the internet, reaching 53 percent of the population in just three years. The number of harmful AI incidents has risen correspondingly. And both experts and laypeople believe the impact will be felt in two areas: elections and relationships.
According to the 2026 AI Index Report [PDF] from Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), "Responsible AI is not keeping pace with AI capability, with safety benchmarks lagging and incidents rising sharply."
Documented AI incidents – defined by the AI Incident Database as "harms or near harms realized in the real world through the deployment of artificial intelligence systems" – reached 362 in 2025, up from 233 in 2024, the report says.
That coincides with a rise in AI adoption: 88 percent of organizations say they are using AI, and about 80 percent of university students admit as much.
One possible explanation for that finding is that AI models have become quite good at programming, with scores on the SWE-bench test of success tackling real-world GitHub issues rising from 60 percent to close to 100 percent in the space of a year.
High scores on a particular benchmark don't tell the full story, because all AI models tend to be weak in different areas. On the AA-Omniscient Index, designed to assess whether models will admit when they're unsure about something instead of simply guessing, hallucination rates across 26 models varied from 22 percent to 94 percent.
When lawyers use AI models to produce "over two dozen fake citations and misrepresentations of fact," and get called out for it by the US Sixth Circuit Court of Appeals, that's an example of what the Stanford HAI researchers mean when they say responsible AI hasn't kept pace with usage.
And despite all the talk of AI superintelligence, AI lags behind people when it comes to telling time – OpenAI's GPT-5.4 High managed to read analog clocks correctly just 50.6 percent of the time as of March 2026, compared to about 90 percent for "unspecialized humans," as described in the ClockBench benchmark [PDF].
Robots demonstrate even less competence, succeeding in only 12 percent of household tasks, based on the BEHAVIOR-1K simulation benchmark.
The HAI report, at 423 pages, represents the Stanford group's summary of the current state of AI research and its impact on society. Written by human researchers with help from ChatGPT and Claude, not to mention financial support from Google, OpenAI, and others, the report's findings extend beyond the lack of "responsible AI" to touch on various aspects of the AI industry.
In terms of public opinion, the report finds "AI experts and the US public disagree on nearly everything about AI's future, except that it will hurt elections and personal relationships."
Sixty-four percent of the American public expect AI to reduce the number of jobs available to people over the next twenty years, while 5 percent foresee AI creating more jobs. Only 39 percent of experts anticipate fewer jobs, while 19 percent of experts project more employment. Experts, however, believe that generative AI will contribute to 80 percent of US work hours by 2030, compared to the public's prediction of 10 percent.
Just 31 percent of US respondents said they trust their government to regulate AI responsibly, the lowest level of any nation. With OpenAI backing an Illinois state bill that would limit the liability of AI companies in the event their models cause catastrophic harm, and the White House pursuing an "industry-friendly AI policy," it isn't difficult to see why Americans might have doubts about their government's interest in protecting them.
The HAI report observes that Chinese AI models have closed the performance gap with US AI models. As of March 2026, the top US model, Claude Opus 4.6, scored 1,503 on the Arena benchmark, just 2.7 percent above ByteDance's Dola-Seed Preview at 1,464. That lead had narrowed as of April 9, 2026, with Claude Opus 4.6 Thinking at 1,548, closely followed by Z.ai's GLM-5.1 at 1,530.
The US continues to lead in AI funding, said to have reached $285.9 billion in 2025. That's 23 times more than the $12.4 billion invested in China, though the report notes it may have under-counted government funding. Even so, the US is losing technical talent. "The number of AI researchers and developers moving to the US has dropped 89 percent since 2017, with an 80 percent decline in the last year alone," the report finds. ®