
A new report from Internet Matters finds that children are turning to AI chatbots for emotional support and help with schoolwork, raising concerns about weak safeguards, misinformation, and emotional dependency - especially for vulnerable kids.

The report, titled "Me, myself & AI," shows that AI chatbots have become a regular part of kids' digital lives. Based on surveys, focus groups, and user testing, the study found that 64 percent of children and teens ages 9 to 17 have used AI chatbots. Usage of services like ChatGPT has nearly doubled over the past 18 months.

According to the survey, ChatGPT is the most popular tool (43 percent), followed by Google Gemini (32 percent) and Snapchat's My AI (31 percent). Kids are using these tools for more than just information - increasingly, they're turning to chatbots for advice and even as replacements for friends. The report warns that this trend not only amplifies existing online risks but also creates new ones, while safety measures from providers haven't kept up.

Vulnerable kids seek comfort and friendship from AI

The study found that vulnerable children - those with special educational needs or health challenges - rely on chatbots even more for emotional support. This group uses chatbots at higher rates (71 percent compared to 62 percent of their peers) and is nearly three times more likely to turn to companion AIs like Character.AI or Replika.

For many, the reasons are emotional. Nearly a quarter (23 percent) of vulnerable kids said they use chatbots because they have no one else to talk to, while 16 percent said they were looking for a friend. Half of these users described chatting with AI as "like talking to a friend." This bond shows up in the way kids talk about chatbots, often using gendered pronouns like "he" or "she."

Chatbots are also popular for schoolwork. Nearly half (47 percent) of 15- to 17-year-olds use them for studying, writing essays, or learning languages. Many see them as faster and more helpful than traditional study tools.

But over-reliance is a risk: 58 percent of kids who use chatbots believe the bot gives better answers than searching on their own. The report's authors warn this could encourage passive learning and erode kids' critical thinking skills.

Unfiltered advice and weak age checks

About a quarter (23 percent) of kids who use chatbots have asked for advice, ranging from everyday questions to mental health concerns. Trust in chatbot answers runs high - 40 percent say they have no concerns about following the advice, a figure that rises to 50 percent among vulnerable kids.

Internet Matters' user tests, however, found chatbots can give inconsistent or even dangerous responses. In one case, a bot on Character.AI gave weight loss tips before being stopped by a filter.

A major problem is the lack of effective age verification. Most platforms set the minimum age at 13, but 58 percent of 9- to 12-year-olds said they use chatbots anyway, since kids can bypass age checks simply by entering a false birth date. Testers even found user-created chatbots on Character.AI called "Filter Bypass" that are explicitly designed to get around safety features.

Parents and schools struggle to keep up

The report concludes that kids are largely on their own when navigating these technologies. While most parents (78 percent) have talked to their kids about AI, these conversations usually stay on the surface. Sixty-two percent of parents worry about the accuracy of AI-generated information, but just 34 percent have discussed how to check if something is true.

Schools aren't filling the gap either. Only 57 percent of kids said AI had been discussed at school, and advice from teachers was often inconsistent. Deeper issues, like AI bias, rarely come up. Internet Matters is calling for coordinated action from industry, government, and schools to better protect children and build real AI literacy.

Summary
  • A report from Internet Matters finds that 64 percent of children and teens aged 9 to 17 have used AI chatbots, with usage nearly doubling in the last 18 months. ChatGPT is the most popular, and kids are turning to these tools for both schoolwork and emotional support.
  • Vulnerable children, including those with special educational needs or health challenges, are more likely to seek comfort from chatbots, sometimes using them as a substitute for friendship. Over half of young users believe chatbots provide better answers than searching independently, raising concerns about critical thinking and emotional dependency.
  • The report highlights weak age checks and inconsistent safety measures, noting that many children under 13 use chatbots by bypassing filters. Parents and schools struggle to provide guidance, and Internet Matters is calling for action to improve safeguards and teach children how to use AI safely.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.