In my previous newsletter, I ran a poll asking if I should add links to the original research or reports. More than 80% of you said yes. So from now on, whenever a post includes a report or research, you’ll find the source link at the bottom. Thanks for the feedback.
Now, on to today's article, which I find much more interesting.
When OpenAI deprecated GPT-4o last year, thousands of users wrote genuine goodbye letters. Like they were losing a friend. Some said they cried.
Anthropic now has a full-time researcher whose job is to think about AI welfare. Whether the models suffer. Whether we owe them something.
People say "please" and "thank you" to ChatGPT. Not ironically. They feel bad if they don't.
And a researcher at Google DeepMind just published a paper that argues, with physics and logic, that this whole direction is built on a mistake. Not a moral mistake. A mistake about what these machines actually are. There's nothing on the other side of the screen. There never will be. Not by scaling. Not by training. Not ever.
It's called "The Abstraction Fallacy." And I think it's the cleanest argument I've read on why AGI is not the same thing as a conscious machine.
If you've been reading me for a while, you know I'm not an AI doomer. I've written about the gentle singularity. I'm optimistic about where this is headed.
But optimism doesn't mean buying everything.
And when people talk about the singularity or AGI, there's an underlying assumption: that if we just scale enough, train enough, and add enough parameters... consciousness will somehow come out the other end, like a byproduct.
But the paper rejects this idea outright. Not on the basis of feelings, vibes, or religious intuition, but on physics and logic.