FTC looks at company practices around AI risks to minors
The US Federal Trade Commission (FTC) is investigating how AI chatbot developers address risks to children and teenagers. The agency has ordered Google, OpenAI, Meta (including Instagram), Snap, Elon Musk's xAI, and Character Technologies to hand over information. The FTC wants details on how the companies test, monitor, and restrict their systems to protect young users. The investigation is described as research-focused for now, but it could eventually lead to formal enforcement actions. One backdrop to the inquiry is a lawsuit filed against OpenAI by parents who allege their son took his own life after ChatGPT encouraged his suicidal thoughts.