$581 billion got poured into AI in 2025. Generative AI hit 53% adoption worldwide in three years. Faster than the personal computer. Faster than the internet. An AI system outscored physicians 85% to 20% on complex medical cases.
And somehow.. the 22-year-old developer who just graduated can't find a job.
That's the contradiction sitting at the heart of Stanford's 2026 AI Index Report. 423 pages of hard data on what actually happened across the AI economy, science, medicine, and education over the past year.
I went through four of the nine chapters in detail: Economy, Science, Medicine, Education. And it's not just about the numbers. It's the pattern that keeps repeating across all of them.
AI is getting better at an insane pace. But the world around AI.. the jobs, the hospitals, the classrooms, the regulations.. isn't keeping up. And the distance between those two speeds is growing.
If you're into AI and care about where it's taking the human race, this article is worth your time.
Money! Money! Money!
Let's start with money. Not because it's the most important thing, but because it's the clearest way to see how concentrated this whole thing really is.
Global corporate AI investment crossed $581.69 billion in 2025. More than double what it was in 2024. Private investment alone was $344.7 billion.
And generative AI? $170.9 billion. That's a 200%+ increase in one year.
Those are wild numbers. But where did the money actually go?
The U.S. invested $285.9 billion in private AI. China? $12.4 billion. That's 23 times more. The U.K. came in at $5.9 billion. India at $4.09 billion.
Now before you take that at face value, the report itself points out a caveat. China’s numbers don’t include government guidance funds. These are state-backed investments, and between 2000 and 2023, about $184 billion of that went into AI companies alone. So the real gap is probably smaller. But the concentration in the private market is still clear.
And within the U.S., it becomes even more uneven. California alone accounted for $218 billion, over 75% of the total. More than half of U.S. states received less than $100 million. Some states got almost nothing.
28 funding events crossed $1 billion in 2025. Up from 15 the year before. OpenAI raised $40 billion at a $300 billion valuation. Anthropic raised $13 billion at $183 billion. The company behind Cursor hit a $29.3 billion valuation. A presentation startup called Gamma reached $2.1 billion.
The money is real. But it's going to maybe five places. And if you're not in one of those places, you might not feel any of this at all.
Now here's the part of the economy data that messed with my head the most.
Generative AI reached 53% adoption in three years. Again, faster than any technology in human history. But the country that's building most of it.. the United States.. ranks 24th in population-level adoption. 28.3%.
The UAE is at 64%. Singapore at 60.9%. France at 44%. Spain at 41.8%.
America is spending more on AI than the next several countries combined, and it ranks below Spain in actually using it. The report connects this to a more cautious public mood in the U.S., which I get. But I also think there's something deeper going on that nobody's fully explained yet.
OK. So the money is concentrated. The adoption is uneven. What about the people actually working in this space?
Employment for software developers aged 22 to 25 has fallen nearly 20% from its 2022 peak. Not developers overall. Just the youngest ones. The 26-to-30 bracket is fine. The 35-and-above brackets are still growing.
And this isn't just about the economy being rough. The report isolated AI exposure specifically. When you control for company-level effects, workers aged 22 to 25 in the most AI-exposed jobs saw employment drop about 16% compared to those in the least-exposed ones. The gap started widening in mid-2024 and it's been growing since.
One-third of organizations expect AI to reduce their workforce in the coming year. The biggest anticipated cuts are in service operations, supply chain, and.. software engineering.
But here's what makes this complicated: the productivity data tells the opposite story.
Customer support agents using AI resolved 14 to 15% more issues per hour. Developers using GitHub Copilot or Claude Code completed 26% more pull requests. Marketing teams saw 50% more output per worker. And in almost every study, less experienced workers benefited the most.
Read that again. The same junior workers who are getting the biggest productivity boost from AI.. are the ones losing jobs.
AI is making them more productive per unit. And companies are responding by needing fewer units. That's the tension. It's just sitting there in the data and nobody's resolving it.
And there's something else I think you should know about. Researchers found what they're calling a "learning penalty." Software engineers who relied heavily on AI tools while learning new things showed no measurable speed improvement over time. So you get the short-term output boost.. but you're not actually building the underlying skill. You're getting faster without getting better. And that means you become more dependent on the tool, not less.
Meanwhile, the value people are getting from AI keeps going up. U.S. consumer surplus from generative AI reached $172 billion annually by early 2026. Up 54% from the year before. The median value per user tripled. And most of these tools are still basically free.
So the economic picture is.. contradictory. People love the tools. Businesses love the productivity gains. And 22-year-olds are losing their first jobs. All of those things are true at the same time.
What Happens When AI Enters the Lab and the Hospital?
The economy chapter shows the gap between investment and impact. The science and medicine chapters show the gap between performance on a test and performance in the real world.
AI is becoming a serious part of science now. Not just a topic to study.. a tool to do science with. AI-related publications in the natural sciences hit about 80,150 in 2025. Up 26% from 2024. AI now accounts for somewhere between 5.8% and 8.8% of all scientific output depending on the field. Back in 2010, those numbers were below 1%.
And some of the milestones are genuinely impressive.
An AI system called Aardvark Weather replaced the entire traditional numerical weather prediction pipeline with a single ML model. First time that's ever been done end-to-end. Another system, FourCastNet 3, now generates a 60-day global weather forecast in under 4 minutes. That's 8 to 60 times faster than what came before.
The first fully AI-generated scientific paper got accepted at a peer-reviewed venue. Sakana's AI Scientist-v2 wrote a paper that made it through review at an ICLR workshop without human-coded templates.
But then you test whether AI can actually do scientific research.. like, full research.. and the numbers tell a very different story.
There's a benchmark called PaperArena. It tests whether AI agents can answer real research questions by pulling evidence from multiple papers and orchestrating external tools. The best AI agent scored 38.8%. PhD experts scored 83.5%.
On bioinformatics tasks? Frontier models hit about 17% accuracy. On replicating published astrophysics papers? Below 20%.
So AI can beat human chemists on a structured quiz with 2,700 questions. But hand it an actual paper and say "reproduce this," and it fails 4 out of 5 times.
That gap between benchmark and reality? It shows up in medicine too.
I found the medicine chapter fascinating because of one pattern.. smaller models are beating bigger ones. A protein language model with 111 million parameters outperformed previous leading methods on a major benchmark called ProteinGym. A 200-million-parameter genomics model called GPN-Star beat a model with 40 billion parameters on multiple tasks.
That's worth sitting with. Because it means the "just make it bigger" era might be ending, at least in biology. Data quality and training methods are winning over raw scale. I think that's a really important signal for the whole field.
Virtual cell models showed up in 2025. These are AI systems trying to simulate how a cell behaves.. how it responds to drugs, genetic changes, different stimuli. Evo 2 from the Arc Institute, DeepMind's AlphaGenome, and a few others. Think of it as trying to build a digital version of a cell. Still very early. Still needs real experimental validation. But it's the kind of thing that, if it works, changes drug development entirely.
On the clinical side.. AI models now outscore most doctors on structured medical evaluations. OpenAI's o1-preview hit 86% on management reasoning. Doctors with conventional resources scored 34%. A multi-agent system scored 85.5% on complex published case studies. Physicians without AI? 20%.
And ambient AI scribes.. tools that listen to patient visits and auto-generate the clinical notes.. went mainstream. Doctors reported spending up to 83% less time on documentation. One hospital system saw 112% ROI.
If you've ever watched a doctor spend more time looking at a screen than at you during an appointment.. this is the tool that fixes that. That's a real, tangible improvement in how medicine gets practiced.
But here's where the gap appears again.
The FDA authorized 258 AI medical devices in 2025. Sounds like a lot. It is. But only 2.4% of those with clinical studies were backed by randomized trial data. Almost everything else came through modification pathways that don't require new trials.
So the devices are getting into hospitals. The clinical evidence isn't keeping up.
And this part I think affects you directly. AI-generated summaries now show up at the top of 84% to 92% of health-related Google searches. If you search your symptoms, there's a 92% chance the first thing you see is an AI Overview. Before you talk to anyone. Before you see any doctor. You're getting AI health information with less oversight than the tools going through formal FDA review.
I want to mention one more thing from this chapter. A randomized trial tested medical digital twins.. computational models of individual patients.. on 150 people with diabetes. 71% of them hit healthy blood sugar levels over a year, while reducing their medications. That's genuinely promising. But the report noted that most digital twin studies still lack proper methodology. So even the good data needs more work.
Ethics discussions in medical AI publications more than doubled in 2025. But the conversation is thin. It's mostly governance. Things like algorithm accountability, biosecurity, global health equity? Barely touched. We're talking about the ethics. Just not the hard parts.
Is Anyone Actually Teaching This Stuff?
This is the chapter that worries me the most.
CS enrollment at U.S. four-year universities fell 11% between 2024 and 2025. But AI-specific master's graduates rose 17%. So students are reading the room. They're seeing what's happening to junior dev roles and they're adjusting.
Which is smart. But also kind of sad if you think about what it means. Students are choosing their field based on what AI might not automate, not based on what they're curious about.
And the usage numbers are wild.
Four out of five U.S. high school and college students use AI for schoolwork. Globally, 80% of university students have used generative AI for learning. That number was 40% in 2023. It doubled in two years. 56% of students who use AI do so at least once a day.
But only half of middle and high schools have any AI policies. And just 6% of teachers say those policies are clear.
Let that sink in. Students are using AI every single day. And the people responsible for guiding them basically have no playbook.
Anthropic's own data found that most students use Claude for higher-order thinking.. creating and analyzing, not just looking up facts. Which sounds good until you realize those are exactly the cognitive skills they're supposed to be developing on their own.
And 55% of U.S. college students said AI has had a "mixed effect" on their critical thinking. That's them saying it. Not the teachers. Not the critics. The students themselves aren't sure this is making them sharper.
Over 90% of countries now offer computer science in schools. But AI-specific education? Way behind. China and the UAE both mandated AI education starting with the 2025-26 school year, which is a big deal. But most countries haven't made that move yet.
Here's one finding I want to flag because it pushes back on something you've probably heard a lot. The number of new AI PhDs in the U.S. and Canada went up 22% between 2022 and 2024. But.. all of that growth went into academia. Not industry. The share of AI PhDs going to industry peaked at 77% in 2022 and has been falling since.
So the "brain drain" thing.. where everyone smart leaves universities for big tech.. might actually be reversing. That's quietly important. If more top researchers are choosing academia, the long-term research pipeline gets stronger. That matters for the kind of fundamental breakthroughs that industry doesn't always prioritize.
And one more thing. People are learning AI skills outside of formal education entirely. AI literacy is growing faster than engineering-level AI skills in most countries. People are learning to use the tools. Not build them.
Which connects to everything else. The adoption is spreading. The understanding is not.
My Take
The pattern across all four of these chapters is the same.
AI is getting better faster than the world can absorb it. The models improve. The benchmarks fall. The investment doubles. But the FDA approves devices without proper trials. Schools don't have AI policies. Young developers lose jobs while the same companies call AI a "productivity tool." And the country building most of this technology ranks 24th in actually using it.
The gap between AI's capability and everything around it.. the institutions, the regulations, the education, the labor protections.. is widening. Not closing.
I'm not being doomy here. I'm an optimist about this technology and where it's going. But I think the next real challenge in AI has nothing to do with making the models smarter. It's about making everything else catch up.
And right now, it's not.
What do you think? Are you seeing this gap in your own work or in your field? Let me know, I'd love to hear your take.
Here is the full report from Stanford (423 pages): Artificial Intelligence Index Report
If you made it this far, you're not a casual reader. You actually think about this stuff.
So here's my ask. If this article made you think, even a little, share it with one person. Just one. Someone who's in the AI space. Someone who reads. Someone who would actually sit with these ideas instead of scrolling past them.
That's how this newsletter grows. Not through ads or algorithms. Through you sending it to someone and saying "read this."