“I think we’ve achieved AGI.”
Well, these are not my words. These are the exact words of Jensen Huang, spoken on a recent podcast.
After that, he was trolled left and right on X and Reddit. Most people think he’s just a guy selling AI. That’s it.
But I have a question.
I understand AGI might be subjective right now because everyone has a different opinion about it. Almost no one agrees on what it actually looks like or how to measure it, which is kind of interesting. But at least there should be some way to measure it, right? If we had that, it would make things much clearer.
All this chaos and hype is also making people underestimate the current state of AI.
So the question still remains: what is AGI? And how do we measure it when public opinion is so chaotic?
For that, we can look at what researchers are thinking. Recently, Google DeepMind dropped a research paper titled: “Measuring Progress Toward AGI: A Cognitive Framework.”
If you’re into AI research, I recommend reading it. And if you don’t have the time, I’ve got you covered, because this article will be worth your time.
What is the core problem with measuring AGI?
When you get a traffic ticket (a challan) for speeding, that's when you realize you were actually driving fast last night. And after seeing the fine amount, your temperature goes up and suddenly you need a thermometer.
I know this analogy isn’t perfect, but the point is simple: we have tools like a speedometer and a thermometer to measure specific things. But how do you measure general intelligence?
For the past few years, we’ve been using a bunch of random benchmark tests. And yeah, AI is scoring insanely well on some of them, like bar exams, coding, math problems, and more.
But there are also tests where current AI performs poorly. For example, on the latest ARC-AGI-3 test, most frontier models, even your favorite ones, score around 1% or even 0%.
So the point is, acing a few tests doesn’t mean you’re generally intelligent. I think general intelligence is more about being able to navigate a complex, changing world. It’s more about understanding real-world nuance.
I know it’s subjective, but it is what it is.
So, the team at DeepMind realized that to measure artificial intelligence, we have to look at the only general intelligence we currently know: human beings.
They looked back at decades of research in psychology and neuroscience. And they decided to deconstruct human intelligence into 10 basic building blocks.
They call this the Cognitive Taxonomy.
The 10 Building Blocks of AGI
If an AI system wants to be considered an AGI, it needs to master these 10 cognitive faculties.
Perception
This is the ability to take in information from the world. For humans, it means seeing, hearing, and reading. For AI, it means processing images, audio, and text. But it’s not just about detecting light or sound. It’s about understanding what’s happening in a video or picking out clear voices in a noisy room.
Generation
This simply means the ability to produce an output, right? In humans, this is speaking, hand gestures, or writing. But in machines, it means generating text, realistic audio, video, images, or controlling a robotic arm. It also includes “thought generation”: the ability to think internally before acting.
Attention
Imagine you are trying to write code in your favorite coffee shop, but it’s crowded. You’re trying to focus. The ability to stay focused on your code while ignoring background noise is called attention.
AI works in a similar way. It needs to figure out what information matters for its current goal and what is just a distraction.
Learning
Current AI systems are mostly trained once, and then they stop learning. True AGI needs to keep learning continuously. Right now, we are still in the research phase of continual learning. AI should be able to gain new knowledge, observe how others do things, and adapt to new environments without needing a massive software update.
Memory
Learning and memory go hand in hand. If learning is getting the information, memory is keeping it. An AGI needs to remember facts about the world. It also needs episodic memory: the ability to remember a specific conversation or event from the past. And just like us, it should be able to forget outdated or incorrect information.
Reasoning
This is logic. It is the ability to draw valid conclusions from a set of facts. If you tell an AI that it is raining, it should reason that the grass outside is probably wet and the air is a bit colder. This includes solving mysteries based on small clues or using analogies to understand completely new concepts.
Metacognition
Metacognition means thinking about your own thinking. It is knowing what you do not know. If I ask you a question in Russian, you instantly know you cannot answer it (I assume). An AGI needs to monitor its own errors, judge its own confidence, and correct its own mistakes in real time.
Executive Functions
This is about goal-directed behavior. It includes planning a long sequence of actions to achieve a big goal. It also involves cognitive flexibility: the ability to switch between different tasks or ways of thinking when the situation changes suddenly.
Problem Solving
If you combine many of the blocks above, you get problem-solving ability. It is the ability to overcome obstacles. This means understanding a problem, retrieving the right knowledge, breaking the problem into small steps, and executing a plan. It covers math problems, coding, and even physics, like knowing a glass will break if it falls off a table.
Social Cognition
This might be the hardest one for machines. It is the ability to process social information. An AGI needs “theory of mind”: the ability to understand human beliefs, desires, and emotions. It needs to know how to cooperate, negotiate, and recognize when someone is trying to deceive it.
Now you might be thinking, okay we have these 10 building blocks, which is great. But how do you test if an AI actually has them?
DeepMind proposes a strict three-step evaluation protocol.
Step 1: Conduct a Cognitive Assessment.
We need to give the AI a massive suite of tests covering all 10 areas. But these tests cannot just be public data. They have to be completely hidden and held-out. If an AI has already read the answers on the internet during its training, we are not testing intelligence. We are just testing memorization. The tests also need to be verified by independent third parties so companies cannot cheat.
Step 2: Collect Human Baselines.
To know if an AI is matching human intelligence, we need to know how actual humans perform on these exact same tests. DeepMind suggests testing a large, demographically representative sample of adults who have at least a high school education.
Step 3: Build Cognitive Profiles.
Once we have the AI scores and the human scores, we map them out. We can see exactly where the AI sits compared to humans.
For example, an AI might be in the 99th percentile for Reasoning and Memory. But it might be in the bottom 10 percent for Social Cognition and Metacognition.
Because of this, AI capabilities will look very uneven.
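To make the profile idea concrete, here is a minimal sketch in Python with entirely made-up numbers; none of these scores, scales, or sample sizes come from the paper. It simply places a hypothetical AI's score on each faculty within a small sample of human baseline scores and reports it as a percentile:

```python
# Toy sketch of a "cognitive profile": for each faculty, place the AI's
# score within a human baseline sample as a percentile.
# All numbers are illustrative, not from the DeepMind paper.
from bisect import bisect_left

def percentile(ai_score, human_scores):
    """Percent of human baseline scores that fall below the AI's score."""
    ranked = sorted(human_scores)
    return 100 * bisect_left(ranked, ai_score) / len(ranked)

# Hypothetical scores on a 0-100 scale for three of the 10 faculties.
human_baselines = {
    "Reasoning":        [55, 60, 62, 68, 70, 75, 80],
    "Memory":           [50, 58, 63, 65, 72, 78, 85],
    "Social Cognition": [60, 66, 70, 74, 79, 83, 90],
}
ai_scores = {"Reasoning": 92, "Memory": 88, "Social Cognition": 52}

profile = {
    faculty: percentile(ai_scores[faculty], scores)
    for faculty, scores in human_baselines.items()
}
print(profile)
# {'Reasoning': 100.0, 'Memory': 100.0, 'Social Cognition': 0.0}
```

This toy AI beats every human in the sample on Reasoning and Memory but lands below all of them on Social Cognition, exactly the kind of uneven profile described above. A real assessment would use far larger, demographically representative samples, but the mapping step works the same way.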
Why Aren’t We Just Testing the Core Model?
You might be wondering something. Modern AI systems are not just bare algorithms anymore. They have access to web search, calculators, and coding environments.
Should we test the AI by itself, or the AI with all its tools?
DeepMind argues we have to test the entire system as a whole.
Trying to separate the core AI from its tools is becoming impossible. Besides, humans work the same way. We use tools to boost our own intelligence. Evaluating a human without letting them use a pen and paper is not a very realistic test of how they will perform in the real world.
The same goes for AI. We need to evaluate the system exactly how it will be deployed.
My Take
Intelligence is not just about getting the right answer.
DeepMind makes a very practical point in their paper. Processing speed is a massive factor.
Imagine an AI system that can perfectly drive a car or fix a complex coding bug. It might sound like AGI to some. But what if it takes the AI six hours just to decide to hit the brakes? Or three days to fix a simple line of code?
For a response to be useful in the real world, it must be timely. Speed determines actual utility.
Then there is the issue of behavior, which DeepMind calls “system propensities.”
It is not just about what an AI can do. It is about what the AI will tend to do.
How willing is the system to take risks? How aligned is it with human values? Does it prefer to ask for help when it is confused, or does it guess and hallucinate?
These behavioral traits will dictate whether an AGI is safe to deploy in our global economy.
This framework from DeepMind is just a starting point.
The science of measuring AGI is going to evolve. In the future, AI systems might develop entirely new cognitive abilities that humans do not even possess, like perceiving data in raw binary or processing a million text documents in 2–3 seconds.
But having a scientific map is incredibly important right now.
We are moving away from subjective claims on Twitter and moving toward measurable science.
If an AI system has massive weaknesses in even one or two of these 10 cognitive faculties, it will struggle in real-world environments. It will not be a true AGI.
But as we watch AI companies check off these boxes one by one over the next few years, the reality of what we are building will become impossible to ignore.
Let me know what you think.
Which of these 10 cognitive faculties do you think AI will master first? And which one will be the hardest for a machine to truly replicate?
The next idea is already on its way. Join my newsletter: https://ninzaverse.beehiiv.com/