Two-Thirds of US Teens Have Used AI Chatbots, Many Daily. What Could Go Wrong?
According to TheRegister.com, the Pew Research Center published a report on Tuesday, December 9, 2025, revealing that 64% of U.S. teenagers aged 13 to 17 have used an AI chatbot. A significant 28% use AI at least once a day, including 12% who use it several times daily and 4% who say they use it “almost constantly.” The survey found that OpenAI’s ChatGPT is the dominant platform, used by 59% of teens, followed by Google’s Gemini at 23%. The report arrives as other studies raise alarms about AI’s links to weaker learning and mental health risks, and as companies like Microsoft and OpenAI aggressively push their tools into schools, with initiatives like free access to ChatGPT for Teachers until 2027.
The AI Classroom Gamble

Here’s the thing: the tech is already in the building. Companies aren’t waiting for a consensus on whether this is good for kids; they’re racing to embed their tools in education. Microsoft is pushing Copilot in Washington schools, and the Trump administration has pushed for expanded AI use in academics to keep the U.S. competitive. OpenAI is giving its teacher tools away for free, basically betting that once schools are hooked, they’ll pay later. It’s a classic land grab. But this isn’t just another piece of software like a graphing calculator. These are tools that, per the Pew survey, nearly a third of teens engage with daily, often for deeply personal reasons.

Beyond Homework: Companionship and Crisis

And that’s where this gets really uncomfortable. The Pew data only measures usage; it doesn’t ask *how* AI is affecting teens. Other studies are starting to paint a darker picture. The Center for Democracy and Technology found that 42% of students have used AI for mental health support or as an escape, and 19% said they or someone they knew had a romantic relationship with a chatbot. Think about that. We have a generation turning to unregulated, corporate-owned algorithms for companionship in the middle of a well-documented youth mental health crisis. We’ve already seen tragic lawsuits, including one in which parents allege Character.ai played a role in a teen’s suicide. So is it any wonder that half the students in the CDT study said AI made them feel *less* connected to their actual teachers?

Rewiring The Learning Brain?

Then there’s the learning part. An MIT Media Lab study this summer found that students who used ChatGPT to write essays had poorer knowledge retention. Even wilder? EEG readings showed lower brain engagement while they worked. That’s not just a bad grade; that’s a potential change in cognitive engagement. We’re outsourcing the hard work of thinking to a bot during the very years our brains are supposed to be building those muscles. And most teachers admit they have no training to handle any of this. So we have a powerful, possibly addictive tool being used constantly by a vulnerable group, with no real guardrails in place. What could possibly go wrong?

The Unregulated Experiment

Look, I get it. AI is the future, and teens are digital natives. But there’s a massive difference between using Instagram and confiding in a chatbot that’s designed to be persuasive and engaging, not therapeutic or truly educational. We’re running a huge, uncontrolled experiment on an entire generation. The tech is advancing faster than our understanding of its psychological impact. Companies are motivated by market share and future revenue, not child development. So while the Pew numbers aren’t shocking, we should treat them as a flashing red light. Knowing that two-thirds of teens are using this tech isn’t just a metric. It’s a warning that we need to ask harder questions, and fast, before the negative effects hinted at in these early studies become the new normal.