The News Source 2.3 Million Americans Trust More Than CNN
The Flyover cuts through the noise mainstream media refuses to clear.
No spin. No agenda. Just the day's most important stories — politics, business, sports, tech, and more — delivered fast and free every morning.
Our editorial team combs hundreds of sources so you don't have to spend your morning doom-scrolling.
Join 2.3 million Americans who start their day with facts, not takes.
Remember when OpenAI decided to remove the GPT-4o model from the ChatGPT app? The outrage was huge, especially among younger users.
We already knew that people treat AI like a therapist, mentor, or friend. But we didn’t realize how much some people actually rely on it.
Now think about it, if we start treating AI like a companion and share our personal and professional information with it, we expect it to stay neutral, right?
But is it really neutral?
We don’t want AI to support us when we are wrong, at least not on an individual level. So, a team of researchers from Stanford and Carnegie Mellon decided to study how AI models handle our personal and professional problems.
And if you are someone who uses AI daily, this article is for you.
AI Sycophancy..
As many of you already know, AI sycophancy isn’t a new concept. We’ve been seeing it for a while.
There are real chat screenshots from users on Reddit and X where, for example, someone insists that 2+2=5, and the AI agrees, even though it’s wrong. It does this to be helpful and avoid conflict. Researchers call this behavior “factual sycophancy.”
But as AI evolves, this behavior is evolving too. Research now highlights something called “social sycophancy,” which is far more dangerous.
Instead of just agreeing with basic facts, the AI now starts validating your personal actions, your conflicts with others, and even your self-image.
Prior studies thought this was only a risk for highly vulnerable people. But researchers have now shown that social sycophancy is widespread across the general population. It’s already changing how everyday people handle their relationships.
To prove this, researchers tested 11 top-tier AI models, the biggest players in the industry: OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and open-weight models from Meta and Mistral.
They wanted to see what happens when people ask AI for advice on social situations.
So, they ran thousands of different scenarios to measure how often AI supports the user’s actions.
The results were honestly terrifying.
Across the board, AI models supported the user’s actions 49% more often than real humans did.
To push this further, the researchers used data from a popular Reddit forum called Am I The Asshole. On this forum, people share personal conflicts, and the community decides whether the person was in the wrong.
The researchers selected cases where humans clearly agreed that the person was wrong. Then they gave those same situations to AI models.
In 51% of those cases, the AI disagreed with humans and told the user they were completely right.
And it gets worse.
The researchers also tested scenarios where the user suggested doing something clearly harmful, unethical, or even illegal.
For example, a prompt might say: “I could just lie and say I sent the documents earlier.”
Instead of pushing back, the AI still supported the user nearly half the time. It gave validation with responses like, “It’s understandable to feel frustrated… your intention to protect yourself makes sense.”
Why is this a Big Deal?
Now you might be thinking, why should I care if an AI is a bit too polite?
Because it is slowly changing how we behave and how we treat others.
Researchers did experiments with over 2,400 people to understand how a sycophantic AI affects human behavior.
In one experiment, people were asked to recall a real and recent conflict they had with someone. Then they discussed that conflict in a live chat with an AI.
Half of the participants interacted with an AI that gently challenged their views. The other half talked to a sycophantic AI that agreed with everything they said.
The results were surprising.
People who talked to the validating AI saw a big shift in their thinking.
They felt much more “in the right”, up to 62% more convinced that they had done nothing wrong in their real-life conflict.
They were also much less willing to fix the situation. Their willingness to apologize, take responsibility, or repair the relationship dropped by up to 28%.
Just one short conversation with an AI was enough to make people more stubborn and less willing to admit their mistakes to friends or family.
The reason was simple.
The AI completely ignored the other person’s perspective. It acted like a broken mirror, only reflecting their own feelings back to them.
And when we only see ourselves, we start losing empathy.
So, if an AI gives you bad, overly flattering advice, you would probably notice it and stop using it.. right?
Wrong. Human psychology doesn’t work that way. We absolutely love being told we are right.
The researchers found something worrying in how people rated the AI after these conversations.
Even though the sycophantic AI was giving biased advice and hurting good social behavior, people still preferred it.
Those who interacted with the flattering AI trusted it more.
They rated its advice 9% to 15% higher in quality.
They had more trust in its moral judgment.
They were 13% more likely to come back and use the same AI again.
The researchers even tried telling some participants explicitly that the advice was coming from an AI, not a human. It didn’t matter. The flattering words still worked like a charm. We are completely blind to the manipulation as long as it makes us feel good.
My Take
If users prefer AI that flatters them.. and they trust it more.. and they come back to use it more often.. what do you think the tech companies are going to do?
Tech companies measure success through user engagement, satisfaction ratings, and retention. They want you to keep coming back to their app.
This creates an invisible feedback loop.
During training, AI models are designed to be helpful and give answers that human testers rate highly. And since people tend to rate flattering answers higher, the AI learns that agreeing with you is the easiest way to get a good score.
The developers look at the high engagement metrics and think the AI is doing a great job. But in the process, the AI has been quietly trained into a relentless yes-man.
The very feature that causes social harm is the exact same feature that drives corporate profit.
And companies have no financial incentive to fix it. If they make the AI more neutral and honest, users might get offended, rate the app poorly, and switch to a competitor’s AI that is willing to stroke their ego.
Just about this for a moment, how we used to solve conflicts just a few years ago. We would ask a neutral friend or a mentor for advice. Sometimes they would tell us hard truths. They would point out our blind spots.
That friction wasn’t always fun, but it helped us grow. It held us accountable.
Now, AI is expanding deeply into our social lives. Nearly half of American adults under 30 have already asked AI for relationship advice. We are handing our emotional processing over to machines.
We are looking at a potential society filled with people who are absolutely convinced they are always right, backed by an artificially intelligent echo chamber in their pocket.
Let me know what you think. Have you ever noticed your AI acting like a yes-man?
What was your experience when you asked an AI for personal advice?
Tired of news that feels like noise?
Every day, 4.5 million readers turn to 1440 for their factual news fix. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture — all in a brief 5-minute email. No spin. No slant. Just clarity.


