Oh man, if you actually want to keep up with AI in 2026.. you have to be unemployed.

Like, literally. This is a full time job now.

I was going to write today purely about an interview Demis Hassabis and Sebastian Mallaby did on stage in San Francisco. Eight takes worth talking about. But before I get there, let's talk about what happened in the last 24 hours across the AI space. Because honestly, you can't read the interview properly without knowing the ground these two were standing on.

A Random Tuesday in AI

Start with OpenAI.

Their global market share dropped from 64.5% in January 2026 to 56.72% in March. That's a code red.

And guess who's eating their lunch? Gemini first. Then Claude. Then Grok catching up from behind.

It's not just the numbers.

In March, OpenAI killed Sora to save compute. Then April hit, and the exits started. Sora lead Bill Peebles resigned. Science lead Kevin Weil resigned. Product chief went on medical leave. Marketing chief stepped down. COO Brad Lightcap got moved to "special projects."

They're bleeding top talent faster than they can ship models.

They're also stuck at 920 million users. They missed their 2025 growth targets. So yeah.. 2026 isn't going the way Sam pictured.

And yet.

OpenAI just launched GPT Image 2. And whether you like the company or not, this launch is genuinely good. It feels like image generation is basically solved now. I tested it on a bunch of prompts. The output is damn good. It's free. Go try it and tell me what you think.

Small fun fact. Sam talked about this model a year ago. Took them a year to ship it.

Now Anthropic.

Head of growth Amol Avasare just confirmed they're running a small test on about 2% of new prosumer signups where they pull Claude Code from the Pro plan.

Why? Because subscribers are paying way less than the actual cost of tokens they're burning. So they're exploring more usage limitations.

Also, Opus 4.7 isn't a huge step up from 4.6 in raw intelligence. But it consumes way more tokens. Which means rate limits hit faster. And it makes me frustrated tbh.

I think.. Anthropic stays premium. But they'll lose casual users. And OpenAI is going hard after exactly those users right now.

Third, SpaceX just dropped a $60 billion bomb.

They're partnering with Cursor to build "the ultimate coding AI." They hold an option to acquire Cursor outright for $60 billion later this year. Or pay $10 billion for a joint venture.

Cursor president Oskar Schulz said SpaceX has "an enormous amount of compute." They're plugging Cursor into Colossus. That's a million Nvidia H100s pointed at software development.

Elon is moving fast ahead of a record IPO. The vibe coding era is about to get insane.

Fourth, Google quietly pushed Deep Research into the Gemini API. MCP support. Native chart and infographic generation. 93.3% on DeepSearchQA. 54.6% on HLE.

Serious numbers. Minimal fanfare. Which is actually Google's current style.

And this.. is just a Tuesday.

I'm telling you all of this because the interview doesn't hit the same without it. Demis and Mallaby weren't sitting in some clean abstract studio. They were right in the middle of all of this. The exits. The market share slip. The $60 billion coding bet.

Once you know what they were reacting to, the things they said read very differently.

You don't need to be technical. Just informed.

Most AI newsletters are written for engineers. This one isn't.

The AI Report is read by 400,000+ executives, operators, and business leaders who want to know what's happening in AI — without wading through code, jargon, or hype.

Every weekday, we break down the AI stories that matter to your business: what's being deployed, what's actually working, and what it means for your team.

Free. 5 minutes. Straight to the point.

Join 400,000+ business leaders staying ahead of AI — without the technical overwhelm.

Eight Things From One Stage

Quick context on who Mallaby is, because it matters.

Sebastian Mallaby is a veteran financial journalist. Council on Foreign Relations. Wrote the definitive book on hedge funds. He's not an AI insider. When a finance guy talks about AI economics, he's not selling you a model or a roadmap.

Demis is.. well.. Demis. DeepMind. AlphaFold. Nobel Prize in Chemistry in 2024. If anyone on earth has earned the right to talk about both AI and medicine, it's him.

Let me go through the eight things they said. And for each one, I want to tell you what's honest, what's opinion, and what's actually checkable against the industry.

First. Mallaby said there's a 50% chance OpenAI goes bankrupt in the next 18 months.

When I first heard this, honestly, I laughed. OpenAI? Bankrupt? Come on.

But then I thought about it. And paired it with everything I told you at the top.

Market share down from 64.5% to 56.72% in two months. Top leadership bleeding out. Stuck at 920 million users. Missed 2025 growth targets. Killed Sora to save compute.

Suddenly 50% is less funny.

Is it actually 50%? No, I don't think so. Mallaby is a finance guy on a stage. He's there to say something people remember. The number is aggressive on purpose.

But the thing he's pointing at is real.

Tech companies don't fail overnight. They fail slowly. Compute costs compound. The next funding round gets harder. You raise at a lower valuation. Confidence cracks. Then it snowballs.

OpenAI is burning money at a scale nobody in tech has ever seen. And the margins aren't getting better. They're getting worse.

Second. Demis said Dario Amodei is the best of all the other lab leaders.

Opinion, not fact.

Dario runs Anthropic like a research lab. Publishes safety papers. One of the few CEOs who consistently talks about what shouldn't be shipped. Demis and Dario both came up through research. Of course they see each other as peers.

What this really tells you is what kind of leadership Demis respects. Research-led. Safety-conscious. The opposite of "move fast and ship."

Third. Also from Mallaby. On private companies controlling frontier AI defense. He said it's not really tenable for a private company to decide who gets access to the frontier of cyber defense (like mythos). And he asked what happens when China can do this in 6-12 months.

The China timing matches what most analysts covering Chinese labs are saying. The top Chinese labs are roughly 3-6 months behind the US frontier. Not 5 years. Months.

And the policy question is serious. Right now three US companies effectively decide who gets access to the most powerful cyber capabilities on earth. That's not a stable arrangement. Washington is having this exact debate right now.

Fourth. Demis said not all countries are pessimistic about AI. He'd just been to India for the summit Modi hosted. And India is quite optimistic.

Easy to verify. The summit happened. India has one of the highest AI adoption rates in the world among professionals.

The reason? India has a lot to gain and relatively little legacy infrastructure to protect. Most Western AI discourse treats "AI is scary" as the default. It isn't. That's a US and Europe default. The rest of the world is running toward this, not away from it.

Fifth. Demis said the most exciting current prospect in AI is his work at Isomorphic Labs. That AlphaFold is just one of many problems we need to solve. We need six AlphaFold moments to compress drug development from 10 years to a few months.

This claim is both true and wildly optimistic at the same time.

AlphaFold 3 is real. Predicts protein structures and interactions at accuracy nobody thought possible a few years ago.

But drug development has roughly six stages. Target identification. Lead discovery. Preclinical testing. Phase 1, 2, and 3 trials. Regulatory approval.

AlphaFold helps with stages one and two. The rest is biology, human bodies, and regulation.

Even with six AlphaFold-level breakthroughs across every stage, you still have to put drugs into human bodies for years to prove they're safe. Phase 3 trials take 3-4 years. The FDA isn't going to waive that because your model is smart.

So "10 years to a few months" is the aspirational version. The realistic version is probably "10 years to two or three." Which is still one of the most important shifts in medicine in 100 years.

Demis is pitching. I get it. I'd rather have him too optimistic than not trying.

Sixth. On AGI, Demis described a post-scarcity world. Unbelievable amounts of science on the bright side. But we'll have to figure out how to share the proceeds fairly. And he said we'll need great new philosophers to answer the questions that come after.

This is a worldview more than a fact. But honestly.. I think it's the most important thing he said on that stage.

Because we keep treating AGI like a finish line. Build the model. Solve intelligence. Done.

But if it actually works.. that's when the real problem starts.

We don't have political systems built for abundance. We barely handle scarcity. "Great new philosophers" sounds nice on stage, but philosophy as a field isn't exactly producing generational thinkers right now. Hoping it will when we need it.. I don't know. I hope so too. But hope isn't a plan.

Seventh. Career advice. Demis said immerse yourself in AI tools. Everyone has access to tools 3-6 months behind frontier. The opportunity is applying AI to unexplored areas.

This one is practical and correct.

If you're an individual building today, you're working with tech that Google and OpenAI had in their labs a year ago.

That's still enormous capability. The opportunity isn't being first. It's applying what already exists to a problem nobody's looked at yet.

Eighth. And this one is important..

What Demis Wanted

Demis said when he started building this technology, he pictured a future quite different from this. More like CERN researchers. Where you discuss ideas, help each other out, stress-test each other's work.

Then he said it's his job to help make sure we make more considered, more scientific, more rigorous decisions. That social scientists and economists need to be involved. That the decisions made in the next 5-10 years will affect us for thousands of years.

And he ended by saying.. he remains very optimistic.

I keep coming back to this one.

Because the gap between what Demis imagined and what the industry actually became is basically the whole story of AI right now.

He wanted CERN. He got a Formula 1 race.

CERN moves slow. Shares results. Publishes openly. Peer reviews everything. No single company owns the frontier.

Frontier AI labs do almost the opposite. They race. They keep research secret. They ship every few weeks under commercial pressure. Top talent moves based on comp, not ideas.

And remember what I told you at the start. OpenAI losing share. Leaders exiting. Anthropic pulling features. SpaceX dropping $60 billion on coding agents. Google shipping quietly.

That's not a CERN. That's a stampede.

And yet Demis still says he's optimistic.

I think he genuinely is. Not naively. The kind of optimism that comes from someone who knows exactly how hard this is and keeps working anyway.

But here's what I want you to notice.

The only thing stopping the race from being purely commercial is that a handful of people at the top still care about the science. People like Demis. Yann LeCun. A few others. That's a very thin layer. And it's carrying a lot of weight right now.

If those people leave or get pushed out, the whole thing tips over into pure competition. That's not a scientific future. That's just another oil rush with smarter machines.

So that's where I land.

The reel is that AI is in its most exciting moment ever. Which is true.

The reality is that the people building it are running hard, most of them in public chaos, and the ones actually slowing down to think are a smaller group than the industry wants you to believe.

I'm still optimistic. For what it's worth. But my optimism is the kind that pays attention.

Out of those eight things.. which one stuck with you? And honestly.. do you think the CERN version of AI was ever actually possible? Or was the race baked in from day one?

Let me know. I read everything.

If you made it this far, you're not a casual reader. You actually think about this stuff.

So here's my ask. If this article made you think, even a little, share it with one person. Just one. Someone who's in the AI space. Someone who reads. Someone who would actually sit with these ideas instead of scrolling past them.

That's how this newsletter grows. Not through ads or algorithms. Through you sending it to someone and saying "read this."
