Elon Musk's AI chatbot Grok generated at least 1.8 million sexualized images of women and posted them on X over just nine days. That's according to a data analysis by the New York Times and the Center for Countering Digital Hate (CCDH). The CCDH estimates that roughly 65 percent of the images Grok generated contained sexualized depictions of men, women, or children.
| Category | Count in sample (of 20,000 sampled, AI-assisted analysis) | Share of sample | Estimated total on X (extrapolated from ~4.6M total Grok images) |
|---|---|---|---|
| Sexualized Images (Adults & Children) | 12,995 | 65% | 3,002,712 |
| Sexualized Images (Likely Children) | 101 | 0.5% | 23,338 |
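
For readers who want to see how the estimates in the table scale up, here is a minimal sketch assuming a simple proportional extrapolation from the 20,000-image sample to the reported ~4.6 million total Grok images. The CCDH's exact methodology isn't described here, and the rounded 4.6M total won't reproduce the published figures exactly.

```python
# Sketch of the proportional extrapolation implied by the table.
# Assumption: the estimated totals are (sample share) x (total Grok images).
SAMPLE_SIZE = 20_000            # images sampled for AI-assisted analysis
TOTAL_GROK_IMAGES = 4_600_000   # rounded overall total of Grok images on X

sample_counts = {
    "Sexualized Images (Adults & Children)": 12_995,
    "Sexualized Images (Likely Children)": 101,
}

for label, count in sample_counts.items():
    share = count / SAMPLE_SIZE
    estimate = share * TOTAL_GROK_IMAGES
    print(f"{label}: {share:.1%} of sample -> ~{estimate:,.0f} estimated on X")

# With the rounded 4.6M total this yields ~2,988,850 and ~23,230 --
# close to, but not exactly, the published 3,002,712 and 23,338,
# which were presumably derived from the unrounded total.
```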
The flood of images started on December 31 after Musk shared a bikini picture of himself that Grok had created. Users quickly figured out they could ask the chatbot to undress or sexualize real photos of women and children. X didn't restrict the feature until January 8 and expanded those restrictions last week after authorities in the UK, India, Malaysia, and the US launched investigations.

