
AI-generated child abuse reports delay investigations

Matthias Bastian
Image: stylized silhouettes of children and teenagers against red and blue digital data streams

Midjourney prompted by THE DECODER

Using AI to detect and report child abuse material on social media platforms can delay investigations.

Social media companies like Meta use AI to detect and report suspicious material, but US law enforcement and the National Center for Missing and Exploited Children (NCMEC) can only open these reports with a search warrant, The Guardian reports.

The only exception is when the content has already been reviewed by an employee at the social media company.

Obtaining a search warrant can take days or weeks, delaying investigations and risking the loss of evidence.

In addition, automatically generated reports often lack the specific information needed to obtain a search warrant.

According to NCMEC, Meta sends by far the largest number of these AI-generated reports. In 2022, more than 27 million reports, or 84 percent of the total, came from Facebook, Instagram, and WhatsApp. Overall, the organization received 32 million human- and AI-generated reports from various sources.

AI delays child abuse investigations

The warrant requirement creates a backlog of AI-generated reports, putting additional pressure on already overburdened law enforcement teams. As a result, many AI-generated leads go unaddressed, according to The Guardian.

The warrant requirement for AI-generated leads is tied to privacy protections in the United States. The Fourth Amendment prohibits unreasonable government searches: if no employee at the company has viewed the flagged content, opening the report counts as a new government search, so investigators must obtain a warrant first.

In addition to AI-generated reports, AI-generated images of child abuse can further slow down investigations.

In October 2023, the Internet Watch Foundation (IWF) warned about the growing volume of AI-generated child abuse material. The proliferation of synthetic but photo-realistic abuse images poses a "significant threat" to the IWF's mission of removing such content from the Internet.

The child safety nonprofit Thorn also fears that the volume of synthetic images will hinder the search for victims and the fight against actual child abuse. The flood of images is overwhelming an investigative system that must determine which images show real victims and which are fake. Both organizations report a sharp increase in such material.
