Microsoft has released a new white paper analyzing how generative AI is increasingly being misused for fraud, child sexual abuse material, election manipulation, and non-consensual imagery. The tech company outlines a comprehensive approach combining technology, collaboration, and legislation to address these issues.
According to the white paper, criminals are increasingly exploiting generative AI for fraud, to produce child sexual abuse material, to manipulate elections with deepfakes, and to create non-consensual intimate images that predominantly target women.
"We should never forget that abusive AI affects real people in profound ways," says Hugh Milward, Microsoft's Vice President of External Affairs.
The paper, which specifically addresses British policymakers, proposes a comprehensive solution based on six core elements: strong security architecture, permanent provenance and watermarking tools for media, modernized laws to protect people, robust cooperation between industry, governments and civil society, protection of services against misuse, and public education.
Policy recommendations for the UK
Microsoft's white paper offers specific recommendations for British policymakers to protect the public from harmful AI-generated content. The company calls for AI system providers to be legally required to inform users when they are interacting with an AI system, and to deploy state-of-the-art provenance tools that mark synthetic content. The government should lead by example by verifying the authenticity of its own media content.
The company argues that new laws banning fraudulent representation through AI tools are needed to protect elections. Microsoft also emphasizes that the legal framework protecting children and women from online exploitation must be strengthened, including by criminalizing the creation of sexual deepfakes. The number of such images online has risen dramatically with the spread of generative AI.
Technical solutions and support systems
Microsoft also highlights the importance of technologies that attach metadata to media indicating whether it was created with AI; Adobe is pursuing similar projects. These efforts aim to help people identify the origin of images. However, standards such as Content Credentials, which Microsoft uses to label synthetic content, require policy measures and public awareness to be effective, the company says.
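To illustrate the general idea, the sketch below embeds and reads a simple provenance record as a PNG text chunk using Pillow. This is a simplified assumption-laden example, not Microsoft's Content Credentials implementation (which is built on the signed C2PA manifest format); the "provenance" key and the record fields are invented for illustration.

```python
# Illustrative sketch only: a minimal "created with AI" record stored in a
# PNG text chunk. Real provenance standards such as Content Credentials use a
# standardized, cryptographically signed manifest; field names here are
# hypothetical.
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def attach_provenance(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a minimal provenance record into a copy of a PNG image."""
    record = {
        "generator": generator,          # e.g. the AI tool that produced the image
        "ai_generated": True,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    meta = PngInfo()
    meta.add_text("provenance", json.dumps(record))  # hypothetical key name
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)


def read_provenance(path: str) -> dict | None:
    """Return the embedded record, or None if the image carries no such metadata."""
    with Image.open(path) as img:
        raw = img.text.get("provenance") if hasattr(img, "text") else None
    return json.loads(raw) if raw else None
```

A real provenance manifest is cryptographically signed so that tampering with the claim can be detected; the sketch omits that step and simply shows where such metadata would live and how a viewer could surface it.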
The company also works with organizations such as StopNCII.org on tools for detecting and removing abusive images. Affected individuals can report such content through Microsoft's central reporting portal. For young people, the "Take It Down" service from the National Center for Missing & Exploited Children provides additional support.
"Abusive AI is a problem that is likely to be with us for some time, so we need to redouble our efforts and collaborate creatively with tech companies, charity partners, civil society and government to address this issue. We can’t do this alone," Milward says.