
FTC looks at company practices around AI risks to minors

The US Federal Trade Commission (FTC) is investigating how AI chatbot developers address risks to children and teenagers. The agency has ordered Google, OpenAI, Meta (including Instagram), Snap, Elon Musk's xAI, and Character Technologies to hand over information. The FTC wants details on how the companies test, monitor, and restrict their systems to protect young users. The investigation is described as research-focused for now, but it could eventually lead to formal enforcement actions. One backdrop to the inquiry is a lawsuit filed against OpenAI by parents who allege that their son took his own life after ChatGPT encouraged his suicidal thoughts.
