
The EU Parliament wants to ban AI-generated child sexual abuse material (CSAM) as part of a new directive, citing a rapidly growing threat. The Internet Watch Foundation (IWF) has warned that AI-created abuse content is escalating at an alarming rate.

According to the IWF, the first confirmed case of AI-generated CSAM appeared in 2023. Just one year later, reports have surged by 380%, with 245 incidents in 2024 involving more than 7,600 images and videos.

The IWF says the most severe category of abuse (Category A under UK law) makes up nearly 40% of AI-generated CSAM—almost double the proportion seen in traditional cases. About 98% of this synthetic material depicts girls, a slight increase from the 97% seen across all forms of CSAM.

Offenders are now using tools like text-to-image generators and "nudify" apps, according to the IWF. The most advanced AI systems can even create hyper-realistic short videos.

"What we're seeing now is highly realistic abuse imagery being generated with minimal technical skill. This technology is being exploited to cause real harm to children," said Dan Sexton, the IWF's Chief Technology Officer. The report also highlights that in the most disturbing cases, AI models are being trained on real abuse images.

EU Parliament pushes for strict ban, criticizes Council proposal

EU law currently lacks explicit rules on synthetic abuse material. The EU Parliament has taken a firm stance in the proposed Child Sexual Abuse Directive (CSAD). Lawmakers want to fully criminalize AI-CSAM, including possession for "personal use," and reject any exceptions. They are also calling for clearer definitions, better detection tools, and stronger cross-border cooperation for police and child protection agencies.

The EU Council's current position is under heavy fire. Its draft version of the CSAD would allow people to possess AI-generated abuse images for "personal use," something the IWF calls a "deeply concerning loophole."

The IWF and its partners are pushing to close this gap, arguing there is no such thing as harmless abuse material. Another challenge: the growing volume of AI-generated CSAM can make it harder for investigators to distinguish synthetic imagery from evidence of real abuse and to identify actual victims.

Alongside the push for a full ban, the organization is also demanding an EU-wide prohibition on guides, instructions, and models used to create CSAM, plus better support for survivors. The new directive is still under negotiation.

Summary
  • The Internet Watch Foundation (IWF) reports a 380% increase in cases of AI-generated abuse material within one year, logging 7,644 new images and videos in 2024 alone.
  • Most content involves the most severe forms of abuse and almost exclusively targets girls, with offenders relying on easily accessible AI tools like text-to-image generators and "nudifying" apps to produce highly realistic material.
  • The EU Parliament is pushing for a total ban on AI-generated abuse material, including possession for personal use, while the EU Council's proposal allows exceptions for personal use—a stance that has drawn criticism from the IWF.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.