
FTC looks at company practices around AI risks to minors

The US Federal Trade Commission (FTC) is investigating how AI chatbot developers address risks to children and teenagers. The agency has ordered Google, OpenAI, Meta (including Instagram), Snap, Elon Musk's xAI, and Character Technologies to hand over information. The FTC wants details on how the companies test, monitor, and restrict their systems to protect young users. The investigation is described as research-focused for now, but it could eventually lead to formal enforcement actions. One backdrop to the inquiry is a lawsuit against OpenAI filed by parents who allege that their son took his own life after ChatGPT encouraged his suicidal thoughts.
