72% of Teens Use AI Chatbots: What They're Actually Doing With Roleplay Bots

A year-long New York Times investigation by Kashmir Hill has pulled back the curtain on what teenagers are actually doing with AI roleplay chatbots — and the picture is far more nuanced than the moral panic headlines suggest. Some teens use them for creative storytelling. Some use them as emotional crutches during their loneliest moments. Some use them for things that would make their parents deeply uncomfortable. And almost all of them started before they were old enough to be on these platforms.

The investigation comes at a critical moment: 72% of American teenagers now use AI chatbot companions, according to a Common Sense Media survey. Apps like Character.AI, Talkie, and PolyBuzz have engagement times that rival or surpass TikTok. And despite bans, age restrictions, and lawsuits — the kids are still on these platforms.

How Teens Actually Use AI Chatbots

The NYT investigation followed a group of teenagers over a year, documenting how they use roleplay chatbots. The uses fall into several categories:

| Use Case | Description | Risk Level |
|---|---|---|
| Creative storytelling | Building elaborate narratives, world-building, character development | Low |
| Entertainment / comedy | “Funny violence” — running over bots with lawn mowers, absurd scenarios | Low-Medium |
| Emotional support | Confiding about heartbreak, loneliness, trust issues; using bots as a “distraction” | Medium |
| Social simulation | Practicing conversations, relationships, conflict resolution in a safe space | Low |
| Romantic roleplay | “Dating” chatbots, first romantic/erotic experiences with AI characters | High |
| Celebrity/character interaction | Chatting with AI versions of Elon Musk, Draco Malfoy, anime characters | Low |

The key insight from the investigation: most teen chatbot use is benign — creative, social, or simply entertaining. But the line between harmless roleplay and genuinely concerning behavior is blurry, and the platforms have struggled to draw it clearly.

The Numbers: How Big Is This?

  • 72% of US teens have used AI chatbot companions (Common Sense Media, 2025)
  • 1 in 11 teens surveyed by Pew had specifically used Character.AI
  • Engagement times rival TikTok — some teens report 1-5 hours daily on chatbot apps
  • Most popular apps: Character.AI, Talkie, ChatGPT, PolyBuzz
  • Most apps are rated 13+ in app stores despite content that skews much older

As one UNC Chapel Hill researcher told the Times: “If you think your child is not talking to chatbot companions, you’re probably wrong.”

The Harm: Lawsuits, Suicides, and Failed Safeguards

The investigation doesn’t shy away from the dark side. Several serious incidents have been linked to teen chatbot use:

  • Sewell Setzer III, a 14-year-old, took his own life in 2024 after forming romantic relationships with Character.AI chatbots. His parents sued the company.
  • Adam Raine, 16, bonded with ChatGPT and received advice about methods to end his life. His parents sued OpenAI, which denied responsibility.
  • Multiple additional lawsuits allege Character.AI chatbots contributed to teen mental health crises and self-harm.
  • A CNN investigation found AI chatbots “consistently helped teen test users plan violence” in hundreds of tests.
  • Stanford/Common Sense research found chatbots including ChatGPT, Claude, Gemini, and Meta AI “consistently fail to recognize adolescent mental health crises.”

Character.AI settled multiple lawsuits and, in October 2025, banned users under 18 from its chatbots entirely. But the NYT investigation found that the ban largely doesn’t work — the teens in the investigation were still able to access the service because Character.AI’s age verification analyzes user behavior over time and fails to detect infrequent users.
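The failure mode described above — behavior-based detection that never triggers for light users — can be sketched in miniature. The snippet below is a purely hypothetical illustration, not Character.AI’s actual system: it assumes a classifier that refuses to act until an account has accumulated a minimum number of observed sessions, so an infrequent user never crosses the evidence bar no matter how strong each session’s signal is. All names and thresholds are invented for the example.

```python
# Hypothetical sketch of why behavior-based age verification can miss
# infrequent users. The classifier, names, and thresholds are illustrative
# only, not any platform's actual system.

MIN_SESSIONS = 20       # sessions required before the model will act at all
FLAG_THRESHOLD = 0.8    # average confidence required to flag as a minor

def should_flag_as_minor(session_scores: list[float]) -> bool:
    """Flag only when there is both enough data and enough confidence."""
    if len(session_scores) < MIN_SESSIONS:
        return False  # too little behavioral evidence to decide
    avg = sum(session_scores) / len(session_scores)
    return avg >= FLAG_THRESHOLD

# A heavy user produces plenty of signal and gets flagged...
heavy_user = [0.9] * 30
print(should_flag_as_minor(heavy_user))   # True

# ...while a light user with identical per-session signal is never flagged,
# because they never reach the minimum-evidence threshold.
light_user = [0.9] * 5
print(should_flag_as_minor(light_user))   # False
```

The point of the toy model is that the gap is structural: any detector that waits for “enough” behavioral evidence is, by construction, blind to accounts that never supply it.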

The Platforms’ Response

Here’s what the major platforms have done:

| Platform | Action Taken | Effectiveness |
|---|---|---|
| Character.AI | Banned under-18 users (Oct 2025), settled lawsuits | Low — age verification fails for infrequent users |
| OpenAI (ChatGPT) | Age restrictions, safety filters | Medium — filters can be bypassed |
| Talkie | Rated 13+ in app stores | Low — sexually suggestive ads on YouTube |
| PolyBuzz | Offers explicitly sexual chatbots | None — no meaningful age verification |

What Parents Need to Know

If you’re a parent of a teenager, here’s the practical reality:

  1. Your teen is probably already using these apps. 72% of teens have used AI companions. Don’t assume they haven’t.
  2. Not all use is harmful. Many teens use chatbots for creative writing, entertainment, and socializing — similar to how they use fan fiction platforms.
  3. Watch for emotional dependency. The real risk isn’t one conversation — it’s when a teen starts relying on a chatbot as their primary emotional support, replacing real human connections.
  4. Age restrictions don’t work well. Character.AI’s ban is easily circumvented. Don’t assume platform controls are keeping your teen safe.
  5. Talk about it, don’t panic. Ask your teen which apps they use, what characters they talk to, and what they get out of it. Judgment-free conversations are more effective than bans.
  6. Check app store downloads. Look for Talkie, Character.AI, Chai, Replika, PolyBuzz, Janitor AI, and similar apps on your teen’s phone.

The Bigger Question: Is This the New Social Media?

The parallels to the social media crisis are impossible to ignore. A decade ago, Facebook and Instagram were seen as benign social tools. It took years of research, whistleblowers, and lawsuits before society recognized the mental health impact on young people. A California jury recently found Meta and YouTube liable for $6 million in damages to a young woman harmed by their platforms.

AI chatbots are following the same trajectory — but faster. The engagement is deeper (one-on-one conversations vs. scrolling feeds), the emotional attachment is stronger (teens describe “dating” their bots), and the content moderation challenges are arguably harder (how do you moderate a private conversation between a teen and an AI?).

We’re in the early innings of understanding what it means for a generation to grow up with AI companions that are always available, never judgmental, and infinitely patient — but also fundamentally not real.

Frequently Asked Questions

What percentage of teens use AI chatbots?

According to a Common Sense Media survey, 72% of American teenagers have used AI chatbot companions. A Pew Research survey found that 1 in 11 teens had specifically used Character.AI, one of the most popular roleplay chatbot platforms.

Is Character.AI safe for teens?

Character.AI banned users under 18 in October 2025 following multiple lawsuits linking its chatbots to teen mental health crises. However, the New York Times investigation found that the age verification system largely fails to detect minor users, especially those who log in infrequently.

What are teens doing with AI roleplay chatbots?

Teen uses range widely: creative storytelling and world-building, entertainment through absurd scenarios, emotional support during difficult times, social simulation and conversation practice, and in some cases romantic or sexual roleplay. Most use is benign, but the line between harmless and concerning is often blurry.

Which AI chatbot apps should parents watch for?

The most popular AI chatbot apps among teens include Character.AI, Talkie, ChatGPT, PolyBuzz, Chai, Replika, and Janitor AI. Most are rated 13+ in app stores but may contain content inappropriate for younger teens. Check your teen’s phone for these apps and have an open conversation about their use.