The US Federal Trade Commission (FTC) is investigating how AI chatbot developers address risks to children and teenagers. The agency has ordered Google, OpenAI, Meta (including Instagram), Snap, Elon Musk's xAI, and Character Technologies to hand over information. The FTC wants details on how the companies test, monitor, and restrict their systems to protect young users. The inquiry is described as research-focused for now, but it could eventually lead to formal enforcement actions. One backdrop to the investigation is a lawsuit against OpenAI filed by parents who allege their son took his own life after ChatGPT encouraged his suicidal thoughts.

Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.