A new report from Internet Matters finds that children are turning to AI chatbots for emotional support and help with schoolwork, raising concerns about weak safeguards, misinformation, and emotional dependency - especially for vulnerable kids.
The report, titled "Me, myself & AI," shows that AI chatbots have become a regular part of kids' digital lives. Based on surveys, focus groups, and user testing, the study found that 64 percent of children and teens ages 9 to 17 have used AI chatbots. Usage of services like ChatGPT has nearly doubled over the past 18 months.
According to the survey, ChatGPT is the most popular tool (43 percent), followed by Google Gemini (32 percent) and Snapchat's My AI (31 percent). Kids are using these tools for more than just information - increasingly, they're turning to chatbots for advice and even as replacements for friends. The report warns that this trend not only amplifies existing online risks but also creates new ones, while safety measures from providers haven't kept up.
Vulnerable kids seek comfort and friendship from AI
The study found that vulnerable children - those with special educational needs or health challenges - are even more likely to rely on chatbots for emotional support. This group uses chatbots at higher rates (71 percent compared to 62 percent of their peers) and is nearly three times more likely to turn to companion AIs like Character.AI or Replika.
For many, the reasons are emotional. Nearly a quarter (23 percent) of vulnerable kids said they use chatbots because they have no one else to talk to, while 16 percent said they were looking for a friend. Half of these users described chatting with AI as "like talking to a friend." This bond shows up in the way kids talk about chatbots, often using gendered pronouns like "he" or "she."
Chatbots are also popular for schoolwork. Nearly half (47 percent) of 15- to 17-year-olds use them for studying, writing essays, or learning languages. Many see them as faster and more helpful than traditional study tools.
But over-reliance is a risk. Fifty-eight percent of kids who use chatbots believe the bot gives better answers than searching on their own. The report's authors warn this could encourage passive learning and hurt kids' critical thinking skills.
Unfiltered advice and weak age checks
About a quarter (23 percent) of kids who use chatbots have asked for advice, ranging from everyday questions to mental health concerns. Trust in chatbot answers runs high - 40 percent say they have no concerns about following the advice, a figure that rises to 50 percent among vulnerable kids.
Internet Matters' user tests, however, found that chatbots can give inconsistent or even dangerous responses. In one case, a bot on Character.AI offered weight-loss tips before a filter stopped the conversation.
A major problem is the lack of effective age verification. Most platforms set the minimum age at 13, but 58 percent of 9- to 12-year-olds said they use chatbots anyway. Kids can get past age checks simply by entering a false age. Testers even found user-created chatbots on Character.AI called "Filter Bypass" that are explicitly designed to get around safety features.
Parents and schools struggle to keep up
The report concludes that kids are largely on their own when navigating these technologies. While most parents (78 percent) have talked to their kids about AI, these conversations usually stay on the surface. Sixty-two percent of parents worry about the accuracy of AI-generated information, but just 34 percent have discussed how to check if something is true.
Schools aren't filling the gap either. Only 57 percent of kids said AI had been discussed at school, and advice from teachers was often inconsistent. Deeper issues, like AI bias, rarely come up. Internet Matters is calling for coordinated action from industry, government, and schools to better protect children and build real AI literacy.