The Internet Watch Foundation (IWF), a nonprofit organization focused on removing child sexual abuse material (CSAM) from the Internet, reports a rise in AI-generated CSAM.

In one month, IWF analysts found 20,254 AI-generated images on a single CSAM forum on the dark web. Of these, 11,108 were considered potentially criminal and were analyzed by 12 IWF specialists who spent a total of 87.5 hours on them.

The criminal images were assessed as violating one of two UK laws: 2,562 were classified as criminal pseudo-photographs under the Protection of Children Act 1978, and 416 as criminal prohibited images under the Coroners and Justice Act 2009.

The IWF says this content can be created with unrestricted, open-source text-to-image systems: typing a description generates realistic images that are virtually indistinguishable from real photographs.

A niche that could grow quickly

The report also highlights other findings, including that AI-generated content is currently a small part of the IWF's work, but has the potential for rapid growth.

AI-generated CSAM is also becoming more realistic, the report says, posing challenges for the IWF and law enforcement. There is also evidence that AI-generated CSAM contributes to the re-victimization of known abuse victims and of famous children. In addition, it gives offenders another avenue to profit from child abuse.

The IWF outlined a series of recommendations for governments, law enforcement, and technology companies to address the growing problem of AI-generated CSAM.

These include promoting international coordination on content handling, reviewing laws on the removal of online content, updating police training to cover AI-generated CSAM, establishing regulatory oversight of AI models, and ensuring that companies developing and deploying generative AI and large language models (LLMs) prohibit the generation of CSAM in their terms of service.

Ultimately, the IWF says, the growth of AI-generated CSAM poses a significant threat to its mission of removing child sexual abuse imagery from the Internet. As the technology advances, the images will become more realistic, and child abuse could increase.

In late June, the National Center for Missing and Exploited Children in the US reported a "sharp increase" in AI-generated abuse images. According to the center, these images complicate investigations and could hinder the identification of victims.

On pedophile forums, users have shared instructions for generating such images with open-source models. Child advocates and U.S. justice officials say the images are punishable under existing law, but there have been no court rulings on their classification or on sentencing.

Summary
  • The Internet Watch Foundation (IWF) reports a rise in AI-generated child sexual abuse material (CSAM) online, with more than 20,000 such images found on a dark web forum in one month.
  • AI-generated CSAM is becoming increasingly realistic, challenging the work of the IWF and law enforcement, especially as the volume grows.
  • The IWF recommends that governments, law enforcement, and technology companies pursue international coordination on handling such content, review laws on removing it online, and ensure regulatory oversight of AI models.