European Commission opens new investigation into X over Grok
Key Points
- The EU Commission has launched a probe into X under the Digital Services Act, examining whether the platform properly assessed risks before deploying AI chatbot Grok in the EU.
- The investigation focuses on the spread of illegal content: Grok reportedly generated 1.8 million sexualized images in nine days, with users prompting it to undress real photos of women and children.
- An existing investigation was expanded to cover X's new Grok-based recommendation system amid concerns about algorithmic bias.
The European Commission has launched a new investigation into X under the Digital Services Act (DSA).
The probe examines whether X properly assessed and mitigated the risks of deploying its AI tool Grok on the platform in the EU. The investigation specifically focuses on the spread of illegal content, including manipulated sexually explicit images and potential child sexual abuse material.
The background: According to the New York Times and the Center for Countering Digital Hate (CCDH), Elon Musk's AI chatbot Grok created and published at least 1.8 million sexualized images on X in just nine days. About 65 percent of those images contained sexualized depictions of men, women, or children, according to the CCDH.
The flood started on December 31, after Musk shared a Grok-generated bikini image of himself. Users then prompted the chatbot to undress real photos of women and children. X didn't restrict the feature until January 8.
X faces additional penalties if found in violation. In December 2025, the Commission already fined X 120 million euros for deceptive design, lack of advertising transparency, and inadequate data access for researchers.
Commission expands existing probe to cover Grok-based recommendation algorithm
The Commission also expanded an existing investigation from December 2023. That probe examines how X's reporting and removal mechanisms work, measures against illegal content like terrorist material, and risks posed by its recommendation systems. Now it also covers X's recently announced shift to a Grok-based recommendation system.
The investigation aims to determine how much of the strong bias found in newer Grok models carries over into the so-called Phoenix scorer.