
Microsoft has released a new white paper analyzing how generative AI is increasingly being misused for fraud, child sexual abuse material, election manipulation, and non-consensual imagery. The tech company outlines a comprehensive approach combining technology, collaboration, and legislation to address these issues.


According to Microsoft's white paper, criminals are increasingly exploiting generative AI capabilities for malicious purposes. This includes AI-generated fraud, child sexual abuse material, election manipulation through deepfakes, and non-consensual intimate images that predominantly target women.

"We should never forget that abusive AI affects real people in profound ways," says Hugh Milward, Microsoft's Vice President of External Affairs.

The paper, which specifically addresses British policymakers, proposes a comprehensive solution based on six core elements: strong security architecture, permanent provenance and watermarking tools for media, modernized laws to protect people, robust cooperation between industry, governments and civil society, protection of services against misuse, and public education.


Policy recommendations for the UK

Microsoft's white paper provides specific recommendations for British policymakers to protect the public from harmful AI-generated content. The company calls for AI system providers to be legally required to inform users when they interact with an AI system. They should also implement state-of-the-art provenance tools to mark synthetic content. The government should lead by example by using authenticity verification for its media content.

The company argues that new laws are needed to ban fraudulent representations through AI tools in order to protect elections. Microsoft also emphasizes that the legal framework protecting children and women from online exploitation must be strengthened, including criminalizing the creation of sexual deepfakes. The number of such images online has increased dramatically due to generative AI.

Technical solutions and support systems

Microsoft also highlights the importance of technologies that store metadata indicating whether media was created with AI. Adobe is pursuing similar projects, which aim to help people identify the origin of images. However, standards like Microsoft's "Content Credentials" require policy measures and public awareness to be effective, the company says.
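The core idea behind such provenance metadata can be illustrated with a minimal sketch: a signed record is attached to a media file stating which tool produced it, and anyone can later check that the record matches the file and has not been altered. This is a simplified, hypothetical format for illustration only; real Content Credentials follow the C2PA standard and use certificate-based signatures rather than a shared key.

```python
import hashlib
import hmac
import json

# Illustrative shared key -- real provenance systems use certificate-based
# signatures issued to the generating tool, not a symmetric secret.
SIGNING_KEY = b"demo-key"

def make_provenance_record(media_bytes: bytes, generator: str) -> dict:
    """Create a signed record stating which tool produced the media."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "generator": generator},
                         sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(),
                         hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_provenance_record(media_bytes: bytes, record: dict) -> bool:
    """Check that the record is untampered and matches the media."""
    expected = hmac.new(SIGNING_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # record itself was altered
    payload = json.loads(record["payload"])
    return payload["sha256"] == hashlib.sha256(media_bytes).hexdigest()

media = b"\x89PNG...example image bytes"
record = make_provenance_record(media, generator="ExampleImageModel")
print(verify_provenance_record(media, record))                 # True
print(verify_provenance_record(media + b"edited", record))     # False
```

As the sketch shows, the metadata is only trustworthy if the signature survives editing and re-encoding, which is why the article notes that provenance standards need accompanying policy measures and public awareness to be effective.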

The company also works with organizations like StopNCII.org to develop tools for detecting and removing abusive images. Affected individuals can fight back through Microsoft's central reporting portal. For young people, the "Take It Down" service from the National Center for Missing & Exploited Children provides additional support.

"Abusive AI is a problem that is likely to be with us for some time, so we need to redouble our efforts and collaborate creatively with tech companies, charity partners, civil society and government to address this issue. We can’t do this alone," Milward says.

Summary
  • In a new white paper, Microsoft warns of the growing misuse of AI systems for fraud, child abuse and disinformation. The company proposes a six-pillar approach that combines technology, legislation, and stakeholder collaboration.
  • The company is calling on UK politicians to take concrete action: Providers should be required to label AI-generated content, new laws should protect against electoral manipulation, and public-private partnerships should support victims.
  • Microsoft is focusing on technical solutions such as Content Credentials to label AI content and is working with organizations such as StopNCII.org. Victims can use a central reporting portal, and young people can use the "Take It Down" service.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.