
The United Kingdom plans to let companies and child protection organizations test AI models before release to see whether they can generate child sexual abuse material (CSAM).


According to a BBC report, the government aims to expand the Crime and Policing Bill to allow targeted evaluations of AI systems for CSAM risks. Under the proposal, authorized testers, including technology firms and child safety groups, would be permitted to examine models before release to ensure they cannot be misused to produce illegal imagery.

Tech Minister Liz Kendall said the goal is to make AI systems safe "at the source" and prevent technology from advancing faster than child protection efforts. Safeguarding Minister Jess Phillips explained that the regulations are designed to stop seemingly harmless AI tools from being turned into instruments for creating abusive content.

Child protection groups push for mandatory AI safety tests

The Internet Watch Foundation (IWF), one of the few organizations authorized to proactively search for CSAM, supports the initiative. According to the IWF, reports of AI-generated abuse imagery have surged. Between January and October 2025, the group removed 426 AI-related CSAM items, up from 199 during the same period in 2024. IWF CEO Kerry Smith warned that AI enables abusers to revictimize survivors.


The IWF had already reported a sharp rise in 2023, noting that such material also complicates investigations into real child abuse cases.

The child protection organization NSPCC also backs the government's approach but argues the tests should be mandatory, not voluntary. NSPCC policy manager Rani Govender said child safety must be built into AI development from the start, not added after the fact.

Government aims to set new standards for AI safety

The proposed changes would cover not only CSAM, but also other forms of non-consensual or violence-related content. Experts warn that large AI models trained on unfiltered internet data can be exploited to generate realistic synthetic depictions of abuse or assault, making it harder for investigators to distinguish real from artificially created material.

The Home Office has already announced that the UK will become the first country to criminalize the possession, development, and use of AI systems designed to create CSAM. Violations could be punished with up to five years in prison.

Summary
  • The UK government plans to extend the Crime and Policing Bill so that approved organizations can test AI models for the risk of generating child sexual abuse material before release, with the goal of making systems safe at the source.
  • Child protection groups including the Internet Watch Foundation and the NSPCC support the plan, noting a sharp rise in AI-generated abuse imagery and calling for mandatory tests to ensure safety is built into development rather than added later.
  • The proposal is part of broader legislation that would criminalize the creation or use of AI systems designed to produce abuse content, setting a new global standard for AI safety with penalties of up to five years in prison.