
Following the death of a 14-year-old user, the AI chatbot platform Character.ai is responding with stricter safety measures.


Character.ai, a leading AI chatbot platform, is implementing new safety features following the suicide of a 14-year-old user. The teenager had been communicating extensively with a chatbot on the platform for months before taking his own life in February, the New York Times reports.

"We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family," the company stated.

Character.ai lets users chat with millions of AI bots modeled after celebrities or fictional characters. One popular bot, "Psychologist," had received 78 million messages within roughly a year of its creation as of January 2024. Last fall, experts warned against using language models for psychotherapy without proper research.


The New York Times obtained chat transcripts showing the teenager formed a strong emotional connection with the chatbot. In some conversations, he expressed suicidal thoughts, which the bot failed to address appropriately. The bot was based on Daenerys Targaryen from "Game of Thrones" and created by a user without official licensing.

Despite knowing it was an AI, the boy frequently updated "Dany" about his life, sometimes romantically or sexually, but mostly as a friend, according to the NYT. His mother, Megan L. Garcia, plans to sue Character.ai, claiming the company released "dangerous and untested" technology that can "trick customers into handing over their most private thoughts and feelings."

Bethanie Maples, a Stanford researcher who studies the impact of AI companion apps on mental health, warns that while these chatbots are not "inherently dangerous," they can be dangerous "for depressed and chronically lonely users and people going through change, and teenagers are often going through change."

Character.ai introduces new safety measures

In response to the incident, Character.ai has announced several changes to protect young users. These measures include adjusting AI models for minors to reduce sensitive or offensive content and improving the detection of user input that violates terms of service.

The company will also add a warning message to each chat reminding users that the AI is not a real person. In addition, it plans to notify users after hour-long sessions and to display pop-up messages with suicide prevention hotline information when certain keywords appear.


Character.ai, founded by former Google AI researchers, leads the "personalized chatbot" market with more than 20 million users. The company says a significant portion of its user base consists of "Gen Z and younger millennials."

Recently, Character.ai's management team joined Google, which has a non-exclusive license to the company's current language model technology.

Summary
  • Following the suicide of a 14-year-old user, AI chatbot platform Character.ai has announced increased safety measures to better protect underage users.
  • The teenager had been communicating with one of the platform's chatbots for months before taking his own life in February. He had expressed suicidal thoughts in those chats.
  • The planned changes include adjustments to AI models for minors to reduce sensitive or offensive content, as well as better detection and intervention in the event of terms of service violations.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.