So, the weekend is almost here, and I had a few podcasts lined up to watch. Because why not?

But last night, I saw a new podcast drop from Dwarkesh Patel with a living math genius, Terence Tao.

And if you don’t know who Terence Tao is, he started attending university-level math classes at age 9 and is still the youngest person to win gold, silver, and bronze medals at the International Math Olympiad. He also completed his PhD at age 21. He’s the kind of person who solves problems that have puzzled humanity for decades.

And here I am, talking about AI. And AI is basically math. So if someone with that level of brainpower is talking about AI, I’d love to spend my time listening.

To be honest, this podcast isn’t easy to follow, at least for me. I had to rewatch a few parts, look up some historical context, take notes, and really think about what it all means for us.

And if you’re someone who genuinely worries about what’s going on in AI today, this article will be worth your time.

The Cost of a “Good Idea” is Now Zero

Traditionally, we’ve romanticized the “Eureka!” moment, in science and in everyday life.

When we think of scientific progress, we picture a lone genius sitting under an apple tree, suddenly getting hit with a brilliant idea that changes the world.

Idea generation has always been the strange, prestigious part of human progress. But Tao pointed out something much more interesting:

AI has driven the cost of idea generation down to almost zero.

To understand this, let’s look back at Johannes Kepler, the famous astronomer who discovered the laws of planetary motion.

Some of you might already know Kepler’s laws. And some of you might not, because you come from a different field. But if I had to sum up his work...

Kepler didn’t just wake up one day with the right answer. For twenty years, he tried completely random, almost crazy relationships to explain how planets moved. He tried matching their orbits to musical notes. He tried matching them to 3D geometric shapes (Platonic solids).

He basically threw thousands of random hypotheses at the wall to see what would stick.

That sounds kind of insane, right? But at the same time, it also feels familiar.

How?

That’s exactly what Large Language Models (LLMs) do today. They are essentially high-temperature Keplers. They can generate thousands of theories, code snippets, or business strategies in seconds.

But here’s the point: having a million ideas doesn’t create abundance by itself. It creates noise.

Kepler only found the real laws of planetary motion because he finally got his hands on a massive, highly accurate dataset collected by another astronomer, Tycho Brahe. He had to verify his wild ideas against hard data.

How Tycho Brahe collected this highly accurate data by building his own observatory, decades before the telescope existed, is a different story, and I’m not going to cover it in this article.

So you might be thinking: if generating ideas is now free and instant, what is the new bottleneck?

Verification.

Right now, we are in a situation where AI can generate a thousand possible solutions to a scientific problem, a coding bug, or a business bottleneck.

But someone, or something, still has to figure out which of those 1,000 ideas is actually right and which 999 are just well-written slop.

Human reviewers and experts are already getting overwhelmed. Scientific journals are being flooded with AI-generated papers. I’m not making this up; the AI community has flagged undisclosed AI use in research papers many times.

So, the future of progress isn’t just about building smarter AIs that can generate better ideas. It’s about building automated, rock-solid systems that can test, verify, and filter those ideas at scale.

Idea generation is solved. Fact-checking the universe is the new frontier.
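To make this generate-cheap, verify-hard loop concrete, here’s a toy sketch of my own (not from the podcast) that replays Kepler’s search in miniature. It throws random exponent hypotheses of the form T^p ∝ a^q at real orbital data (semi-major axis and period for six planets) and keeps only the one that survives verification:

```python
import math
import random

# Brahe-style "hard data": (semi-major axis in AU, orbital period in years)
PLANETS = [(0.387, 0.241), (0.723, 0.615), (1.000, 1.000),
           (1.524, 1.881), (5.203, 11.862), (9.537, 29.457)]

def verified(p, q, tol=0.05):
    """Verification step: does T**p / a**q stay (nearly) constant for every planet?"""
    ratios = [T ** p / a ** q for a, T in PLANETS]
    return max(ratios) / min(ratios) - 1 < tol

def search(trials=10_000, seed=42):
    """Generation step: throw random exponent hypotheses at the wall, Kepler-style."""
    rng = random.Random(seed)
    for _ in range(trials):
        p, q = rng.randint(1, 5), rng.randint(1, 5)
        if math.gcd(p, q) != 1:
            continue  # skip redundant multiples like (4, 6)
        if verified(p, q):
            return p, q  # the one hypothesis that survives the data
    return None

print(search())  # (2, 3): Kepler's third law, T**2 proportional to a**3
```

Generating 10,000 hypotheses takes milliseconds; what decides everything is the verifier and the quality of the data it checks against. That, in miniature, is Tao’s point about the new bottleneck.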

Breadth vs. Depth

One of the most eye-opening parts of the podcast was when Tao talked about how AI solves problems right now.

Recently, AI agents managed to solve about 50 out of 1,100 open “Erdős problems” (a famous set of unresolved math puzzles). On social media, this looked like a massive, sudden leap. People thought AI was about to solve everything.

But then the progress stalled completely. The AIs hit a wall. Why?

Tao explained it using a brilliant analogy.

Imagine human knowledge is a massive, dark mountain range. Inside this mountain range, there are walls you have to climb to make progress. Some walls are 3 feet high. Some are 15 feet high. Some are mile-high cliffs.

Because it’s dark, human scientists don’t know which walls are short and which are tall until they walk up to them. We spend years slowly climbing one specific cliff. This is called Depth.

Now, imagine we unleash millions of AI jumping robots into this dark mountain range.

These robots can’t climb. But they can instantly jump 6 feet into the air.

When we turned these AIs loose, they quickly bounced around the entire mountain range and instantly cleared every single 3-foot and 5-foot wall that humans hadn’t found yet. They picked all the low-hanging fruit at scale. This is called Breadth.

But when they hit a 20-foot cliff? They just crashed. They couldn’t grab a ledge, pull themselves up, and figure out the next step.

This tells us something incredibly important about where AI is right now:

Humans excel at depth. We can slowly, painfully figure out complex, multi-step problems that take years.
AI excels at breadth. It can apply known techniques to millions of problems simultaneously in seconds.
We shouldn’t expect AI to replace deep human expertise tomorrow. Instead, we need to redesign how we work to take advantage of this massive breadth.

Artificial Cleverness vs. Artificial Intelligence

This brings us to a really subtle but important distinction.

If you and I sit down to solve a hard problem, neither of us might know the answer at first. But we’ll try an idea, realize it’s 20% right, hold onto that 20%, pivot our strategy, and slowly build a cumulative understanding until we crack it.

Our minds adapt. Our understanding grows.

Current AI doesn’t do this.

When AI tackles a hard math problem (or a complex coding architecture), it doesn’t say, “Ah, this part didn’t work, but let me save this intermediate logic and build on it.”

It just brute-forces. It jumps, fails, and jumps again.

Even if an AI successfully writes a brilliant piece of code or solves a theorem, its own understanding of the subject hasn’t progressed. If you open a new chat window, it has forgotten everything.

It is incredibly clever. But it is not yet “intelligent” in the way we adapt and learn from our immediate struggles.

This is why AIs right now are succeeding mostly by applying known, standard techniques at a speed and scale that humans simply can’t match. And honestly? That’s still insanely powerful.

My Understanding

If you zoom out from the high-level math talk, Tao’s observations perfectly map onto our everyday jobs, whether you are a software developer, a writer, a researcher, or a designer.

The “Centaur Era” (Human + AI) is here, and it’s going to dominate for a long time.

Here is what that looks like in practice:

  1. The “Boilerplate” is gone forever.
    A hundred years ago, top mathematicians spent their days manually solving differential equations for physicists. Today, a computer does that in milliseconds.
    Similarly, writing boilerplate code, drafting standard emails, or formatting data is no longer a human job. AI will clear all those 3-foot walls for you.

  2. Your role is now high-level direction and validation.
    If AI can write a full-stack app in seconds, your job isn’t writing syntax anymore. Your job is architecture. It’s verifying the AI’s “cleverness” against reality.

  3. Curiosity is your biggest advantage.
    Tao pointed out that AI is making us over-optimize. We search for an answer, get exactly what we asked for, and move on.

But real human genius often comes from being a little inefficient, from accidentally reading the wrong research paper, casually running into someone in a hallway, or going down a random rabbit hole. AI doesn’t do that. It only does exactly what it’s told.

To stay ahead, you need to leave space in your life for curiosity and unplanned exploration.

So it’s very easy to get disappointed when AI fails to perfectly write a complex feature, or when it hallucinates a fact. We get used to the magic so quickly.

As Tao noted, 2026-level AI would have looked like pure witchcraft in 2021. Yet today, we yawn when an AI translates spoken audio, solves college-level physics problems, and generates realistic videos in seconds. We just take it for granted.

We might not have an autonomous, super-intelligent AI that can replace a genius like Terence Tao today.

But we do have a tool that can instantly jump over millions of low-hanging problems across every industry on Earth, freeing up human minds to climb the actual cliffs.

If that doesn’t feel like a revolution, I don’t know what does.

We just need to stop expecting it to be human, and start learning how to direct its incredible breadth.

Let me know what you think.
How are you using AI’s “breadth” in your own daily work? What’s the biggest “3-foot wall” it has knocked down for you recently?
