Remember when GPT-3 was the smartest thing on the internet? Companies were lining up for API keys, building businesses on top of it, and paying steep prices for tokens. That was the story from 2020 to 2022.

Today, GPT-3 feels like a museum piece. No one really talks about it anymore.

The weights didn’t change. The neurons didn’t change. The model still works exactly the same as it did when it was first released. But economically, it’s now worth almost nothing. And in the AI world, no one seems surprised by that.

Two researchers just published a paper that puts actual math on what's happening here. It's called "The Economics of Digital Intelligence Capital," and the math in it is a bit heavy.

But I want to talk about it in simple terms. Because once you see it, a lot of weird stuff about the AI industry suddenly makes sense.

An asset class that ages backwards

I think you know how normal capital works. You buy a factory. It produces things. Over 20 years, it slowly loses value because machines break, parts wear out, and paint chips. The decay is physical, and you can predict it. Accountants have spreadsheets for it.

AI models don't work like that.

The paper introduces "Digital Intelligence Capital" as a separate asset class. And the thing that makes it different is this: its value isn't based on what it can do. It's based on what it can do compared to whatever shipped last week.

That's the whole game. Relative, not absolute.

So when Anthropic releases a new Claude, every model that came before it doesn't just look slightly older. It economically depreciates. Not because the older model got worse, but because the standard moved.

The authors ran simulations showing that a frontier model loses about 23% of its economic value per year just from the global frontier moving forward. If a competitor breaks ahead and you don't keep up, that number jumps to 40% or more. Per year. Without you doing anything wrong.

Just process that for a second. You can spend a billion dollars training a model. The model can be working perfectly. And within a year, almost half of that billion is gone. Not because anything broke, but because someone else held a keynote.
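To make the arithmetic concrete, here's a quick back-of-the-envelope sketch. The 23% and 40% annual rates are the figures the paper reports; the $1B starting value and the plain compound-decay assumption are mine, just to show the shape of the curve.

```python
# Back-of-the-envelope depreciation sketch. The 23% / 40% annual rates are
# the paper's figures as described above; the $1B starting value and simple
# compound decay are my own simplifying assumptions.

def remaining_value(initial: float, annual_decay: float, years: float) -> float:
    """Value left after compound depreciation at `annual_decay` per year."""
    return initial * (1 - annual_decay) ** years

initial = 1_000_000_000  # a $1B training run

for label, rate in [("keeping pace", 0.23), ("falling behind", 0.40)]:
    for years in (1, 2, 3):
        left = remaining_value(initial, rate, years)
        print(f"{label:>14} | year {years} | ${left / 1e9:.2f}B left")
```

At the 40% rate, roughly $400 million of that billion is gone after the first year, and only about a fifth of it is left by year three. That's the curve the rest of this piece is about.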

The paper calls this the Red Queen Effect, after the line in Through the Looking-Glass about how it takes all the running you can do just to stay in the same place.

Claude is not just a chatbot anymore. Is your security team ready?

Claude.ai is one thing. Claude Cowork with MCP connections, running agentic workflows, taking actions across your data with ungoverned skills? That is a different conversation entirely, and most security teams are not equipped to govern it.

Harmonic Security is built to secure everything Claude offers. Full browser controls for Claude.ai, deep governance over agentic MCP workflows, and real-time visibility into what Claude is doing across your organization. So your CISO can say yes to the tools your business is already demanding.

Why nobody in this industry can ever stop

Once you see the Red Queen, the chaos in this industry starts looking less like chaos and more like physics.

Which raises a question: why can't anyone in this industry just slow down for a year?

Anthropic ships a new model every few months. OpenAI keeps raising funding rounds like a startup that's three days from default. Meta pours tens of billions into something that still hasn't shown a clear return. None of them can pause. Even when the cost is brutal. Even when investors start asking uncomfortable questions.

Because pausing isn't conservative in this industry. Pausing is liquidation.

Stop investing for a year and your model becomes the "legacy" option in the dropdown. Customers move on to whoever shipped last. Your team is fine. Your servers are fine. The model still works exactly the way it did. But the economic value of what you own is mostly gone.

This is why the Stargate announcement made sense to me even when everyone was laughing at the $500 billion number. From the outside, that capex looks insane. From the inside, slowing down is the thing that kills you. Spending too much, you can survive. Spending too little, you can't.

The paper has another interesting finding

Think about how the leader in AI gets bigger. More users come to the best model. Those users generate feedback data just by using it. That data goes back into training and makes the model even better. Which attracts more users. And the loop keeps going.

The authors show that past a certain point, this loop stops being competitive. The leader pulls away, and the gap stops being closeable. Nobody has to be cheating. The math itself wants the market to end up with one or two winners.
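Here's a toy version of that loop, just to show its shape. This is my own simplification, not the paper's model: two labs start nearly tied, users flow toward the stronger model, and the feedback data those users generate compounds into capability. The sharpness and growth constants are arbitrary assumptions.

```python
# Toy data-flywheel sketch (my own simplification, not the paper's model):
# users flow toward the stronger model, and each lab's capability grows in
# proportion to the feedback data its users generate.

import math

cap = [1.00, 1.02]   # starting capabilities, nearly tied
alpha = 0.10         # how much a year of feedback data improves capability
sharpness = 8.0      # how strongly users prefer the better model

for year in range(1, 11):
    # Softmax-style user share: a small capability lead wins an outsized share.
    weights = [math.exp(sharpness * c) for c in cap]
    shares = [w / sum(weights) for w in weights]
    # Feedback data, proportional to user share, feeds back into capability.
    cap = [c + alpha * s for c, s in zip(cap, shares)]
    print(f"year {year:2d} | user share {shares[0]:.2f} vs {shares[1]:.2f}")
```

Starting from a capability gap of just 0.02, the slightly better lab ends the decade with more than 90% of the users. Nobody cheated; the loop simply compounds whatever lead exists.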

And honestly, look around. Three or four labs are pulling ahead. Everyone else is either getting acquired, finding a niche the big models haven't absorbed yet, or quietly pivoting away from foundation models. That's not a coincidence. That's the loop doing its thing.

My take

The paper isn't really telling the AI industry anything it doesn't already know. Every founder I see interviewed talks about velocity, about shipping, about not being able to take a breath. They live this every day.

What the paper does is name it. Put math on it. Show that it's not just a vibe, it's how the industry actually works underneath.

And that changes how you think about regulation. The authors argue that the usual antitrust playbook doesn't really fit here. You can break up a dominant lab, sure. But if the underlying setup still rewards concentration, the pieces just grow back together. You're treating the symptom.

So they propose something different. Make labs share their user feedback data. Let interaction logs move between companies. Basically, take the thing that lets the leader pull ahead and make it less proprietary. Fix the structure, not the company.

I don't know if any of that gets implemented. The political will to regulate AI in any meaningful way feels weaker every month, not stronger. But at least the framing is correct. You can't fix a structural problem with case-by-case legal action.

There's also a section aimed at downstream startups. The authors call it the Wrapper Trap. If your application is just a thin layer on top of a foundation model, the model will absorb your features faster than you can build new ones. The math is brutal on this. Most GenAI wrappers are doomed by the production function itself, not by their team or their funding or their go-to-market motion.
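To make the wrapper trap tangible, here's a toy race of my own (the paper's actual production-function argument is more formal): the wrapper ships features at a steady pace, while each new base-model release absorbs a chunk of whatever the wrapper has built. The specific rates are arbitrary assumptions.

```python
# Toy "wrapper trap" sketch (my own assumptions, not the paper's production
# function): the wrapper builds at a fixed rate, and each yearly base-model
# release absorbs half of whatever still makes the wrapper unique.

unique = 20.0            # features the wrapper has that the base model lacks
build_rate = 6.0         # features the wrapper ships per year
absorbed_fraction = 0.5  # share of those features the next release makes redundant

for year in range(1, 9):
    unique += build_rate                 # the wrapper keeps shipping
    unique *= (1 - absorbed_fraction)    # the new base model absorbs features
    print(f"year {year}: ~{unique:.0f} features still unique")
```

Whatever head start you plug in, the series converges to roughly one release cycle's worth of differentiation. The only way out is to change the parameters of the race, which is what the escape routes below amount to.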

The escape route, the authors say, is to build something the foundation model can't easily eat. That usually means having data the big labs don't have. Or being plugged into something physical, like robots or hardware. Or owning a domain so specific that no general-purpose model is going to wake up tomorrow and just… do it.

Anything thinner than that gets absorbed.

I keep wondering how many "AI startups" we're celebrating right now are basically just sitting on top of GPT or Claude with a nice UI. Some of them are doing well today. But the math in this paper says a lot of them won't survive the next big model release. The founders probably don't know yet which side they're on.

And here's the part I keep coming back to. A normal business is something you can rest on. You build a factory, it keeps making money even if you take a week off. AI labs don't get that. The moment they stop pushing, the value of what they own starts disappearing. There's no version of this where they get to relax.

I don't know if there's a steady state at the end of all this. Maybe the frontier saturates and the running slows down. Maybe it doesn't. Maybe some lab a decade from now is still pouring trillions into model training because their shadow price would collapse the day they stopped. That's a strange world to be building toward, and I don't think any of us are really thinking about what it means yet.

Let me know what you think. If you had to put real money on which lab is still standing in 2030, who's your bet?

If you made it this far, you're not a casual reader. You actually think about this stuff.

So here's my ask. If this article made you think, even a little, share it with one person. Just one. Someone who's in the AI space. Someone who reads. Someone who would actually sit with these ideas instead of scrolling past them.

That's how this newsletter grows. Not through ads or algorithms. Through you sending it to someone and saying "read this."
