So, yesterday was Sunday, and I spent a few hours watching a great interview with Andrej Karpathy on the No Priors podcast. And if you are in the AI space or have been following me for a while, you already know this man needs no introduction. I talk about him and his work a lot.
He covered many things in this podcast that I think you should know about, like software engineering, AI agents, creation, and more.
Recently, he also published a list on GitHub about the jobs AI is most likely to replace in the coming years. As expected, most digital work ranked high. For example, software engineering scored 9 out of 10, meaning it is highly likely to be replaced.
Later, he deleted the list, saying it was misinterpreted and didn’t truly reflect his views. He also clarified that he didn’t assign the scores himself; an LLM did, since the whole thing was just a two-hour vibe-coded project.
So yeah, there’s a lot to talk about in this article. But if you ask me what I think about all this, especially about software engineering, it’s simple: coding was never just about writing scripts. It’s about problem-solving and thinking, just like math and science.
AI might replace script writers, but it won’t replace problem solvers.
And maybe that’s why you might have heard the news that OpenAI is doubling down on its workforce. I’m not sure how accurate that is, but it gives a strong signal of where things are heading.
But hey, enough deep talk.
The Era of “AI Psychosis” and the End of Typing
So, if you are a software engineer and I ask you what you do, you’ll probably say you write code. You type some colorful letters into an IDE to make it perform tasks. Cool, right?
But according to Karpathy, typing code is dead.
He said he hasn’t typed a single line of code since December. A man who has spent two decades doing phenomenal work in AI research has completely changed how he works. He went from writing 80% of his code and letting AI handle 20%, to letting AI do 100% of it.
He described it as living in a state of “AI psychosis,” working 16-hour days, not typing code, but directing multiple AI agents to get things done.
Think about that for a second.
Most of us think coding means writing syntax. But now, coding is no longer about syntax. It’s about macro actions. It’s about managing a team of digital workers.
You tell Agent A to research a solution, Agent B to write the architecture, and Agent C to test for bugs.
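That division of labor can be sketched in a few lines. This is purely illustrative, not anyone's real pipeline: `call_llm` is a hypothetical stand-in for whatever model API you use.

```python
# A minimal sketch of "macro actions": orchestrating several agents
# instead of typing code yourself. `call_llm` is a hypothetical helper
# standing in for a real LLM API call.

def call_llm(role: str, task: str, context: str = "") -> str:
    # Placeholder: in practice this would hit a model endpoint.
    return f"[{role}] output for: {task}"

def orchestrate(feature_request: str) -> str:
    research = call_llm("researcher", f"Research approaches for: {feature_request}")
    design = call_llm("architect", "Propose an architecture", context=research)
    code = call_llm("engineer", "Implement the design", context=design)
    review = call_llm("tester", "Find bugs in this implementation", context=code)
    # If the tester finds problems, you loop back. The quality of what
    # comes out depends on how well you route context between steps,
    # not on how fast you type.
    return review

print(orchestrate("add rate limiting to the API"))
```

The point of the sketch: your job collapses into writing the goal and wiring the context, which is exactly where the "skill issue" lives.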
And if the output fails?
It’s no longer the AI’s fault. It’s what Karpathy calls a “skill issue.”
We are no longer limited by how fast we can type or how much compute power we have. We are limited by our own ability to give clear instructions, provide the right context, and orchestrate these models.
In other words, if we don’t get it right, we are now the bottleneck, as Karpathy puts it.
So, officially, here we are!!
Also, when I say “Agent,” you might be thinking of ChatGPT or some other model. But an agent is much more than just a text bot.
So, let me simplify it a bit:
An agent is an AI that doesn’t just wait for you to talk to it. It operates in a loop. It has a sandbox, it has memory, and it can use tools to take actions on your behalf, even when you aren’t looking. Karpathy calls them “claws”: entities that reach out into the digital world and manipulate it for you.
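One way to picture that loop, as a toy and not any real agent framework: the tool functions below are stubs, and a real agent would ask an LLM which tool to call next.

```python
# Illustrative agent loop: observe -> act -> remember, with tools.
# Everything here is a stand-in to show the shape, not a real framework.

def search_web(query: str) -> str:
    return f"(stub) search results for '{query}'"

def run_in_sandbox(code: str) -> str:
    return f"(stub) executed: {code}"

TOOLS = {"search": search_web, "execute": run_in_sandbox}

def agent_loop(goal: str, max_steps: int = 3) -> list:
    memory = []  # persists across steps, unlike a one-shot chat reply
    for step in range(max_steps):
        # A real agent would let an LLM pick the next tool based on
        # memory; here we alternate deterministically for illustration.
        tool = "search" if step % 2 == 0 else "execute"
        observation = TOOLS[tool](goal)
        memory.append(observation)  # the loop keeps running unattended
    return memory

print(agent_loop("find the Sonos system on my network"))
```

The key contrast with a chatbot: the loop, the memory, and the tool calls all run without you in the conversation.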
Let me explain what this looks like in real life.
Karpathy created a personal AI agent for his house and named it “Dobby.” He didn’t spend weeks programming it. He just asked the agent, “Hey, I think I have a Sonos sound system on my network. Can you find it?”
The agent:
Scanned his local network.
Found the Sonos IP addresses.
Realized there was no password protection.
Reverse-engineered the API endpoints via web searches.
Asked Karpathy, “Do you want to play some music?”
Within three prompts, he was playing music in his study. The agent then did the same for his lights, his HVAC, his pool, and his security cameras. He tied it all to a WhatsApp number. Now, when he texts “Sleepy time,” Dobby turns off the whole house. When a delivery truck pulls up, Dobby looks through the security camera, identifies the FedEx logo, and texts him a photo saying, “You’ve got mail.”
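The “scan the network” step is less magic than it sounds. Sonos speakers typically expose a local HTTP interface on port 1400, so a toy version of the discovery step (the subnet, timeout, and port are my assumptions, not anything from Karpathy's setup) is just a port scan:

```python
# Toy version of the "find my Sonos" step: scan a /24 home subnet for
# hosts answering on port 1400, the port Sonos speakers typically use
# for their local HTTP interface. Subnet and timeout are assumptions.

import socket

def find_sonos(subnet: str = "192.168.1", port: int = 1400) -> list:
    found = []
    for host in range(1, 255):
        ip = f"{subnet}.{host}"
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.05)  # fast, noisy scan; fine on a home LAN
        try:
            if sock.connect_ex((ip, port)) == 0:  # 0 means the port is open
                found.append(ip)
        finally:
            sock.close()
    return found

if __name__ == "__main__":
    print(find_sonos())
```

An agent that can write and run something like this in a sandbox, then read API docs it finds on the web, gets you most of the way to “Do you want to play some music?”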
This is the end of the “App Era.” We don’t actually want 50 different apps on our phones to control our lives. We want one intelligent entity that understands our intent and handles the technical details in the background.
We want JARVIS.
But I think we have a few problems here.
The “Jagged” Mind of AI
You might be wondering: if AI is so smart that it can take over a smart home in three prompts, why does it still make stupid mistakes?
That’s a fair question.
If you are into coding, you know what I am talking about. One minute, AI writes a flawless full-stack app; the next, it struggles to perfectly center a div. Karpathy described this perfectly:
Interacting with AI today feels like talking to a brilliant system-programming PhD student who is simultaneously a 10-year-old child.
This is what we call “jagged intelligence.”
Here’s why it happens: Models are trained using Reinforcement Learning (RL). They are heavily optimized for tasks that have clear, verifiable metrics, like writing a piece of code that passes a unit test.
If an AI writes a CUDA kernel to make a program run faster, we can verify instantly if it worked.
But what about soft skills? What about nuance?
If you ask ChatGPT to tell you a joke today, it will likely give you the exact same joke it gave you three months ago.
Why hasn’t the joke gotten better while the coding got 100x smarter? Because humor isn’t strictly verifiable. It’s off the optimization rails. AI is a genius when it has clear rules, and a wandering child when it doesn’t.
AutoResearch: Removing Humans from the Loop
Let’s connect the dots.
If AI is a genius at verifiable tasks, what is the most verifiable task in the tech world right now?
Making AI better.
Karpathy built a system called “AutoResearch.” The concept is simple but powerful.
Normally, a human researcher spends weeks tweaking hyper-parameters, testing theories, running experiments, and checking results to make an AI model perform slightly better. But humans are slow. We sleep. We have biases. We hold onto “earned confidence” from our past experiences that might actually be wrong.
So, Karpathy wrote a script giving the AI boundaries, a goal, and the ability to run its own tests. He let it run overnight.
When he woke up, the AI had already found improvements and small changes he completely missed.
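We don’t know what Karpathy’s actual script looks like, but a bare-bones version of “give it boundaries, a goal, and the ability to run its own tests” could look like this. Here random search stands in for the agent’s proposals, and the evaluation function is a stub:

```python
import random

# Bare-bones overnight search loop: try hyper-parameter settings within
# fixed boundaries, evaluate each one, keep the best. In the real thing
# an agent proposes the next config; random search stands in for it here.

SEARCH_SPACE = {"lr": (1e-4, 1e-1), "batch_size": [16, 32, 64, 128]}

def evaluate(config: dict) -> float:
    # Stand-in for "train a model and measure validation accuracy".
    # This toy metric simply prefers mid-range learning rates.
    return 1.0 - abs(config["lr"] - 0.01) - 0.0001 * config["batch_size"]

def auto_search(trials: int = 100, seed: int = 0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {
            "lr": rng.uniform(*SEARCH_SPACE["lr"]),
            "batch_size": rng.choice(SEARCH_SPACE["batch_size"]),
        }
        score = evaluate(config)
        if score > best_score:  # the machine never sleeps on a result
            best_config, best_score = config, score
    return best_config, best_score

print(auto_search())
```

Swap the stub for real training runs and the random proposals for an LLM that reads the previous results, and you have the shape of an autonomous researcher that works while you sleep.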
We are entering the era of recursive self-improvement. The major AI labs aren’t just building smarter models for you and me to use; they are building autonomous scientist agents that run 24/7, pulling ideas from research papers, writing code, testing it, and making themselves smarter.
AI has entered its continual-learning phase.
Atoms vs. Bits: Why You Shouldn’t Panic About Terminators Just Yet
With all this rapid advancement, you might be wondering: What about physical robots? Are they going to take over the physical world tomorrow?
The short answer is: No. Not yet.
Here is the fundamental difference between the physical world and the digital world: Atoms are a million times harder to manipulate than bits.
In the digital world, AI can copy, paste, delete, and iterate almost instantly. This “unhobbling” of digital tasks will lead to a huge boost in productivity and major changes in how things are done. Software engineering, data analysis, content creation, these digital-first fields are going to change drastically in the next few years.
But the physical world is different. It’s complex and unpredictable. On top of that, hardware is expensive, which makes robotics much slower to progress.
So while your digital life might soon be handled by a “Dobby,” you won’t see a humanoid robot perfectly folding your laundry or cooking your dinner anytime soon.
The real opportunity right now is at the interface, using AI to read data from physical sensors (like cameras or lab tools) and turn it into something useful in the digital world.
My Take
If you built a software library a few years ago, you had to write extensive documentation so other humans could read it, learn it, and use it.
But Karpathy made an observation about his recent projects: He doesn’t write documentation for humans anymore. He writes markdown files for agents.
If an AI agent can understand the core logic of your code, you don’t need to explain it to a human. The human can just ask their agent: “Explain this to me like I’m a beginner.” Or, “Explain this to me in Python.” The agent becomes the ultimate personalized tutor, possessing infinite patience and perfect adaptability.
Our job is no longer to explain things to each other. Our job is to explain things to the agents, so the agents can explain them to the world.
So, if there is one massive takeaway from all of this, it’s that we are moving from the “Prompt Era” to the “Loop Era.”
We are moving away from treating AI like a smart search engine that gives us an answer and stops. We are entering a world of autonomous loops, where AIs talk to other AIs, optimize their own code, run our homes, and reshape the digital economy while we sleep.
Let me know what you think. Have you started using AI as an autonomous agent yet?
What was your “I am the bottleneck” moment?
The next idea is already on its way, join my newsletter: https://ninzaverse.beehiiv.com/