
Stanford Study: AI Experts Are Optimistic About AI. The Rest of Us … Not So Much


A student works in the hallway at Stanford University's Institute for Human-Centered AI inside the Gates Computer Science building in Stanford, California, on March 23, 2023. The Institute released its 2026 AI Index report, which reveals that the technology is advancing faster than society’s ability to understand, govern, or trust it.  (Kori Suzuki for The Washington Post via Getty Images)

For nine years now, the AI Index Report from the Stanford Institute for Human-Centered AI (HAI) has combed through data from across academia, industry and government to produce an annual snapshot of where artificial intelligence stands and to suggest where it's heading.

The report covers the biggest technical advances, investments, trends in education, health, legislation and the environment, offering an empirical foundation for understanding AI’s rapid evolution and real-world adoption.

The 2026 report also details a growing tension, especially among Americans: expert excitement about what AI is capable of, and public fear about what it means for their personal lives and jobs.

“Are we well-positioned as a society to manage its direction, absorb its disruption and ultimately decide how we’re going to leverage this technology?” said Sha Sajadieh, who leads the AI Index for Stanford’s Institute for Human-Centered Artificial Intelligence.

She added that the general public needs to channel their fear of the unknown, not to mention news of mass layoffs in one industry after another, and move past reactivity to take advantage of the best AI has to offer. “Part of that is up-skilling at every age, in every way. There’s a lot of opportunity, but the onus is on us to fully realize the opportunity this technology presents us, and understand it.”

The report is considered a must-read for policymakers in academia, business and politics. But as transparency from top AI developers declines, Sajadieh acknowledged it's harder to know what needs to be addressed, especially with regulation or legislation, "for us to understand what risks we want to mitigate first as a society."

"Enthusiasm and evangelism around AI have relegated considerations about how to responsibly manage its applications and use cases to the back burner," Stephen Baiter, executive director of the East Bay Economic Development Alliance, wrote to KQED.

He observed that jobs tied to the physical world, especially in areas like construction, health care, and public safety, seem to be at the least risk of disruption. But he has concerns beyond AI’s immediate impacts on labor markets. “There has been strong deference toward delaying or ignoring sensitive core human rights and quality of life issues related to individual/personal privacy, safety, and security.”

Other critics of AI go further. "The ones who don't see eye to eye with the leading experts and the general public are the companies themselves, which are engaged in a race to replace humans as quickly as possible," Chase Hardin, a spokesman for the Future of Life Institute, a nonprofit dedicated to reducing global catastrophic and existential risks from transformative technologies, wrote in an email.

Hardin said that public polling is unambiguously negative about the risks of AI. “We can argue about why that is, but the public is deeply skeptical of the companies themselves, the technology, and it is incredibly anxious about what it means for their children.”

Top takeaways of the AI Index Report include:

1. AI experts and the public have very different perspectives on the technology's future. On jobs, 73% of U.S. AI experts said AI's impact is positive, compared with only 23% of the public, a 50 percentage-point gap. Similar divides emerge over the economy and medical care.

Globally, trust in governments to regulate AI varies. Among surveyed countries, the United States reported the lowest level of trust in its own government to regulate AI, at 31%. Globally, the EU is trusted more than the United States or China to regulate AI effectively.

2. AI capability is accelerating and reaching more people than ever. Private companies built more than 9 in 10 of the world’s most powerful AI models in 2025, and some of those models are now beating human experts on PhD-level science and advanced math exams.

3. Productivity gains from AI are appearing in many of the same fields where entry-level employment is starting to decline. Studies show productivity gains of 14% to 26% in customer support and software development, with weaker or negative effects in tasks requiring more judgment.

In software development, where AI's measured productivity gains are clearest, employment among U.S. developers ages 22 to 25 fell nearly 20% from 2024, even as headcounts for older developers continued to grow.

4. Students are using AI, but their educational institutions are still playing catch-up. Four out of five U.S. high school and college students now use AI for schoolwork, but only half of middle and high schools have AI policies in place, and just 6% of teachers say those policies are clear.

A review of more than 500 clinical AI studies found nearly half relied on exam-style questions instead of real patient data, while just 5% used actual clinical data. (LPETTET via Getty Images)

5. AI is transforming clinical health care, but rigorous evidence remains limited. AI tools that automatically generate clinical notes from patient visits saw substantial adoption in 2025. Across multiple hospital systems, physicians reported up to 83% less time spent writing notes and significant reductions in burnout.

Beyond certain tools, however, the evidence base for clinical AI remains thin. A review of more than 500 clinical AI studies found that nearly half relied on exam-style questions rather than real patient data, with only 5% using real clinical data.

6. AI's environmental footprint is expanding alongside its capabilities. Training a single AI model last year generated roughly as much carbon as 16,000 round-trip flights from San Francisco to New York. Researchers estimate that running just one widely used AI model, GPT-4o, may consume enough water annually to meet the drinking needs of every person in Los Angeles and San Francisco combined.

7. The United States leads the world in AI investment, but its ability to attract global talent is declining. U.S. private AI investment reached $285.9 billion in 2025, more than 23 times the $12.4 billion invested in China — though looking at just private investment figures likely understates China’s total AI spending, given its government guidance funds.

The U.S. also led in entrepreneurial activity with 1,953 newly funded AI companies in 2025, more than 10 times the next closest country, which was the U.K. However, the number of AI researchers and developers moving to the U.S. has dropped 89% since 2017, with an 80% decline in the last year alone.

8. The U.S.-China AI model performance gap has effectively closed. U.S. and Chinese models have traded the lead multiple times since early 2025.

The U.S. still builds more of the world’s most powerful AI models, but China is publishing more research, filing more patents, and installing more robots in its factories. South Korea stands out for its innovation density, leading the world in AI patents per capita.
