
The European Commission has issued guidelines for very large online platform providers (VLOPs) and very large online search engines (VLOSEs) to mitigate systemic risks to electoral processes.

The guidelines include specific guidance and proposals to mitigate risks that may arise from generative AI. They are based on Regulation (EU) 2022/2065 (Digital Services Act or DSA).

Examples of generative AI risks cited by the European Commission include deceiving voters or manipulating electoral processes by creating false and misleading synthetic content about political actors, and misrepresenting events, polls, contexts, or narratives.

Generative AI systems can also produce false, incoherent, or fabricated information, commonly called "hallucinations," which can misrepresent reality and potentially mislead voters.


AI content needs (better) labeling

To mitigate the risks associated with generative AI, providers should ensure that content generated by GenAI systems is identifiable to users, for example through watermarking.

Providers should also give users standard interfaces and easy-to-use tools for tagging AI-generated content. These identifiers should be easily recognizable to users, including in advertising.

Meta announced such a feature for its social platforms just a few days ago. Most major AI companies have also agreed to the C2PA standard for image tagging.
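To make the labeling idea concrete, here is a minimal Python sketch of a C2PA-style provenance label. This is an illustration only, not the real standard: C2PA embeds cryptographically signed JUMBF manifests backed by X.509 certificates, whereas this toy version uses a plain JSON manifest and an HMAC signature, and the model name is a made-up placeholder.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key for the demo; the actual C2PA standard
# uses public-key certificates, not a shared secret.
SECRET = b"demo-signing-key"

def make_manifest(content: bytes, generator: str) -> dict:
    """Attach a machine-readable 'AI-generated' claim to a piece of content."""
    manifest = {
        "claim": "ai_generated",
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the label matches the content and was not tampered with."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and body["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

image = b"\x89PNG...synthetic image bytes..."
label = make_manifest(image, "ExampleImageModel")  # hypothetical model name
print(verify_manifest(image, label))           # True for untouched content
print(verify_manifest(image + b"x", label))    # False once content changes
```

The key property platforms need is the second check: once the content is edited, the label no longer verifies, so a stale or copied "AI-generated" tag cannot silently attach to different material.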

The EU also wants providers to ensure that AI-generated information is based on reliable sources as much as possible, and to alert people to potential errors in the generated content. The generation of inaccurate content should be minimized.

In addition, public media literacy should be strengthened, and providers should engage with relevant national authorities and other local stakeholders to escalate election-related issues and discuss solutions.


The document emphasizes the critical role of journalists and media providers with "well-established internal editorial standards and procedures." The availability of trustworthy information from pluralistic sources is important for a functioning electoral process, the EU said.

A cautionary example of generative AI in an election context is Microsoft's Bing Chat, which was criticized for providing false information about elections in Germany and Switzerland. The AI chatbot misled users with inaccurate poll results and false names of party candidates.

Microsoft says it has made significant progress in improving the accuracy of Bing Chat's (now Copilot) responses, but these cases highlight the structural problems of relying on generative AI for critical information.

Summary
  • The European Commission has issued guidelines for major online platforms and search engines to mitigate systemic risks to electoral processes, including those posed by generative AI.
  • To mitigate the risks, providers should ensure that AI-generated content is identifiable to users, for example through watermarks, and is based on reliable sources.
  • The guidelines also emphasize strengthening citizens' media literacy and working with national authorities and local stakeholders to discuss and find solutions to election-related problems.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.