A guy built a $1.8 billion company with his brother and a laptop. A data analyst in Sydney designed an mRNA cancer vaccine for his dog using ChatGPT. A tech founder made his terminal cancer undetectable by treating it like a startup with AI as his co-pilot.

And the world just... moved on. People scrolled past these stories. Talked about them for a day. Then went back to arguing about which AI model is better.

Sam Altman wrote an essay in 2025. He called it "The Gentle Singularity." And honestly, that phrase has been stuck in my head ever since. Because he might be right. The singularity isn't arriving the way science fiction promised us. There's no big explosion. No single moment where everything changes. It's arriving gently. One impossible story at a time. And we're just... adapting.

If you've been reading me for a while, you know I'm an optimist. Not the blind kind. The kind who looks at the situation, listens, understands, and then forms an opinion. I talk a lot about AI research and where this whole thing is headed. I've been doing this since last year. I share what the industry leaders are saying and what researchers from different labs are publishing. I add my own take. You appreciate the effort, and sometimes you correct me when I'm wrong.

So now, I think it's the perfect time to talk about AGI. The singularity. From my point of view.

Are we chasing it right? Or is it some goal that's never going to be achievable? Or is it true that AGI is months away, like the buzz on X keeps telling you? Definitely not on Reddit though, coz Reddit people are wild. A good wild. Lol.

But here's the real question that I want to start with. The question nobody agrees on.

What's Your Definition of AGI?

This isn't a regular article where I throw timelines at you. I wanna talk to you here.

Everyone has a different definition of AGI. And I think that tells us something important about this technology that we don't talk about enough.

If we look at the last big tech wave, it was crypto, right? Crypto was subjected to a limited set of opinions. I'm not talking about right or wrong, I'm talking about the range of how people thought about it. It was about making money, it was about NFTs, it was about scams, decentralization, and maybe a handful of other things. A huge mass of people revolved around crypto, but with a relatively small set of takes.

AI is fundamentally different. And I think the reason is this: crypto was a financial tool. You could agree or disagree on its value, but the conversation had boundaries. AI is an intelligence tool. And when you start talking about intelligence itself, the conversation has no boundaries. Because everyone's relationship with intelligence is personal.

A million people in the AI space will give you a million different definitions of AGI. And that's never happened with any technology in history. That level of opinion diversity tells you something. It tells you that this technology touches something deeper than money or convenience. It touches how we think about thinking itself.

And here's what makes it even more interesting.

For some people, AGI is already here.

I've met a few people like this. They've handed over their entire workflow to a $20 Claude subscription. And they say, "Look, I don't have grand ambitions or anything. Whatever I need to do, Claude handles 80-90% of it efficiently. I don't know much about AGI, but for me? This is AGI. I spend a lot of time with my kids now."

That last line is important. Because when a technology frees up your time to be more present with your family, something real has shifted. Not at the research level. At the human level.

Now, I also completely agree that current AI is, at its core, a next-token predictor. We keep tweaking algorithms and mechanisms to make it more efficient. But the next-token prediction approach, on its own, will probably never give us true AGI. And that's not a fringe opinion. People like Yann LeCun, one of the greatest minds in AI, stand firmly on this.

But here's what I want to do before we try to define AGI in our own terms. Let's look at what we predicted about the future of AI just a couple of years ago... and where we actually stand now. Because I think the gap between prediction and reality tells us more than any definition.

The First Solo Billionaire

In 2024, Sam Altman said something that caught my attention.

"In my little group chat with my tech CEO friends there's this betting pool for the first year that there is a one-person billion-dollar company. Which would have been unimaginable without AI — and now it will happen."

When he said this in 2024, most people, including smart people, thought this was years away. Some thought it was never going to happen. A billion-dollar company run by one person? That breaks everything we understand about how businesses work. You need teams, departments, layers of management, thousands of employees.

And look what just happened in 2026.

Matthew Gallagher started a company called Medvi from his house in Los Angeles. Spent $20,000 and two months. AI wrote the code. AI made the website. AI made the ads. AI handled customer service.

First month, 300 customers. Second month, 1,000 more. First full year, $401 million in sales. This year, on track for $1.8 billion.

His only hire? His younger brother. That's the entire company.

The New York Times verified the numbers. $65 million profit last year. More than $3 million coming in every single day.

Now compare this. Hims & Hers sells weight-loss drugs online. 2,442 employees. $2.4 billion revenue. 5.5% profit margin. This guy is doing nearly the same numbers with two people and triple the margins.

The stack? ChatGPT, Claude, and Grok writing code. Midjourney for images. Runway for video ads. ElevenLabs handling customer calls. Custom AI agents stitching it all together.

He grew up living in motels and cars. Taught himself to code on a laptop his uncle gave him. Sold samurai swords on eBay as a teenager. Didn't finish college. Moved to LA to become an actor.

Now he's running the fastest-growing company nobody has heard of.

When his website broke during a hike, he had to sprint home because there was nobody else to fix it. Lost 200 customers in one hour. That's the reality of a two-person company doing $1.8 billion.

A VC told him not to raise money. He listened. Zero outside funding. He owns 100% of it.

Sam Altman literally emailed the New York Times after this story broke, saying he won the bet with his tech CEO friends, and that he "would like to meet the guy."

Two brothers. $20,000. A laptop. And every AI tool they could get their hands on.

Now, if you ask this guy, "Is this AGI for you?" you might get a yes. And honestly? I wouldn't argue with him. Because what he did was supposed to be impossible. And he did it casually. From his living room.

That's the gentle singularity. The impossible becomes normal so fast that we forget it was supposed to be impossible.

AI is Starting to Cure Diseases

We've always heard that the real test isn't money. It's whether AI can tackle humanity's biggest hurdles. Disease. Death. The things that money alone has never been able to solve.

But here's what I find remarkable. It's not just big labs with billion-dollar budgets doing this anymore. Regular people, individuals with no biology background, are starting to fight diseases using AI. And that shift, from institutions to individuals, is the part that should make you sit up.

Two stories went viral recently. And I think both of them are historic.

Paul Conyngham and his dog Rosie.

In 2024, Sydney-based data analyst Paul Conyngham's dog Rosie was diagnosed with terminal mast cell cancer. Given months to live. Chemotherapy failed. Surgery failed. Doctors had nothing left.

So a data analyst, not a biologist, not a doctor, not a researcher at a pharma lab, decided to fight cancer himself.

He sequenced the tumour's DNA and used ChatGPT, along with tools like AlphaFold, to analyse the genetic data. He identified neoantigen targets (neoantigens are novel proteins that form on cancer cells due to DNA mutations) and designed a fully personalised mRNA cancer vaccine. For his dog. He then collaborated with UNSW scientists, who manufactured the vaccine in a lab. Rosie received the shots. Her tumours shrank. Her condition improved.

It was the first bespoke mRNA cancer vaccine ever made for a dog.

Just process that for a second. A guy with a laptop and an AI subscription did something that would have required an entire biotech team a decade ago.

Sid Sijbrandij and his osteosarcoma.

Sid Sijbrandij, co-founder of GitLab, was fighting osteosarcoma. Standard treatments (surgery, chemo, radiation) all failed. His cancer relapsed, and there were no clinical trial options left. Doctors essentially told him there was nothing more they could do.

So he treated his cancer like a startup.

He assembled a full-time medical team, generated 25 terabytes of his own health data, built a 1,000-page personal medical handbook, and ran parallel diagnostics and experimental therapies. He used ChatGPT and other LLMs to scan thousands of research papers, organise fragmented medical data, identify overlooked treatment angles, and compress months of literature review into days.

This AI-assisted process helped design personalised neoantigen approaches, including custom vaccines derived from his tumour DNA.

His cancer is now undetectable.

A few years ago, if you told me a person would use a chatbot to help cure their own cancer, I would have said you're insane. And yet, here we are. And we moved on from that news in about 48 hours.

That's the gentle part. The singularity is so gentle that we've stopped being shocked by miracles.

AI Slop?

So let's tally this up.

Is AI making solo billionaires? Yes.

Is AI starting to help cure diseases? Yes.

Is AI impacting the economy? I would say yes.

But before you get hyped up... no, this is not AGI.

And I think it's important to be honest about that. Because the hype machine is real, and I don't want to add to it.

What we are seeing right now are the early glimpses. The previews. We are witnessing the setup, not the main event.

Here's why I say that.

If an AI can help you build a billion-dollar company but can't hold a coherent thought beyond a million tokens of context... that's not AGI. It's a very powerful tool. There's a difference.

True AGI means the system can think, learn, and adapt the way we do. Continuously. Without forgetting. Without breaking down when the problem gets too big or too long.

Ilya Sutskever says the same thing. The man hasn't launched a single product, and yet his company is valued in the billions. We are truly in a research phase. The hard problems are still unsolved.

What Actually Needs to Happen

From my point of view, we need breakthroughs in two specific areas. And I want to explain why these two and not something else.

First: Continual learning.

Right now, AI models are frozen after training. They learn from a massive dataset, and then they stop learning. Every conversation you have with ChatGPT or Claude? The model isn't actually learning from it in real time. It's performing, not growing.

Humans don't work like this. You learn something today and it changes how you think about something tomorrow. Your knowledge compounds. Every experience builds on the last one. That continuous growth is a core part of what makes intelligence... intelligence.

Until AI can do this, until it can learn and grow in real time without needing to be retrained from scratch, we don't have AGI. We have very sophisticated pattern matching.
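To make the "frozen" point concrete, here's a toy sketch. It uses a made-up bigram "model" in plain Python, which has nothing to do with how ChatGPT is actually built, but it shows the shape of the gap between a predictor that is frozen after training and one that folds new interactions back into itself:

```python
from collections import defaultdict

# Toy "next-token predictor": a bigram model trained once on a small corpus.
# Deployed LLMs are analogous at vast scale: weights are fixed after training.
def train(corpus):
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)  # most likely next token

frozen = train(["the cat sat", "the cat ran"])

# A frozen model never learns from new conversations:
print(predict_next(frozen, "dog"))  # None — "dog" was never in training data

# A continual learner would fold each new interaction back into its weights:
def continual_update(model, sentence):
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1

continual_update(frozen, "the dog barked")
print(predict_next(frozen, "dog"))  # "barked" — knowledge now compounds
```

The hard research problem is doing that update step at the scale of a frontier model without wrecking what it already knows; this toy just shows what's missing.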

Second: The context problem.

This one is more technical but equally important. Current AI models have a ceiling on how much information they can hold and reason over at once. That ceiling is the context window.

When I say we need "infinite context," I don't literally mean infinite. I mean we need an algorithmic breakthrough that removes context as a bottleneck. Where AI can process and reason over a million pages of information as effortlessly as it handles a single paragraph.

Because real intelligence doesn't have a context limit. When you're solving a hard problem, you pull from everything you've ever learned. Every book, every conversation, every experience. You don't hit a wall at 200,000 tokens and start forgetting the beginning.
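Here's a toy way to see that "forgetting the beginning" failure mode. The 8-token window and the helper name are invented for illustration; real models attend over hundreds of thousands of tokens, but the cliff works the same way:

```python
WINDOW = 8  # tokens the toy "model" can attend to at once (real windows are far larger)

def add_to_context(context, new_tokens, window=WINDOW):
    """Append tokens, then truncate to the window — the oldest tokens are simply gone."""
    return (context + new_tokens)[-window:]

context = []
for turn in ["my name is Rosie and", "I like long walks on", "the beach with"]:
    context = add_to_context(context, turn.split())

print(context)             # the window now starts mid-conversation
print("Rosie" in context)  # False — the name has fallen out of the window
```

Retrieval and summarisation tricks can paper over that buffer, but they don't remove the bottleneck, which is why I count this as a research problem rather than an engineering one.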

And here's why I pick these two and not, say, reasoning or embodiment. Reasoning is already improving fast with every new model release. Embodiment through robots will come once the intelligence is there. But continual learning and context? These are the foundation problems. Solve these, and everything else accelerates.

So When Does AGI Actually Arrive?

I think by 2028. Maybe by the end of 2028. And I'm optimistic about that timeline.

Here's a sign to watch for: before the arrival of AGI, coding will be almost completely solved by AI. When you see that happen, take it as the signal. AGI is at the door.

Why coding? Because coding is the most structured form of human reasoning that we've digitized. If AI can handle all of it, the jump to general reasoning becomes an engineering problem, not a research problem.

And the good part about engineering problems? They get solved. Sooner or later.

Till then, yeah... industry leaders will keep hyping up AGI to sell their products. Sam will keep saying it's almost here. Others will keep saying it's decades away. The truth, as usual, is somewhere in between.

But here's what I know for sure.

The singularity has already started. It's just so gentle that most of us haven't noticed. We're already living inside it. We've been stepping into it, one miracle at a time, one impossible story at a time. And we keep walking because each step feels normal.

That's the beautiful and terrifying thing about the gentle singularity. By the time you realize you're in it, you've been in it for years.

I'm not saying this to scare you. I'm saying this because I think the gentle singularity demands a different kind of vigilance. Not panic. Not fear. But awareness. The kind where you're not just using AI, but thinking about what it means that you're using it.

Let me know what you think. What's your personal definition of AGI?

If you made it this far, you're not a casual reader. You actually think about this stuff.

So here's my ask. If this article made you think, even a little, share it with one person. Just one. Someone who's in the AI space. Someone who reads. Someone who would actually sit with these ideas instead of scrolling past them.

That's how this newsletter grows. Not through ads or algorithms. Through you sending it to someone and saying "read this."

And honestly? That means more to me than any metric.
