Thanks to ChatGPT, Midjourney and the like, more and more AI-generated content is appearing on the web. Its machine origin is usually unknown. That is why an EU commissioner is calling for AI content to be labeled.

Věra Jourová is Vice-President for Values and Transparency at the EU Commission. She is involved in drafting the EU's anti-disinformation code and has now given its signatories "further homework" - including an explicit section on how to deal with generative AI.

Google, OpenAI, Microsoft and others should label AI content

According to Jourová, generative AI services such as ChatGPT and Google Bard should not be open to misuse by malicious actors seeking to generate disinformation. Providers of such services would need to put appropriate safeguards in place.

AI-generated content would also need to be "clearly labeled" to be identifiable by users. "Freedom of speech belongs to humans, not machines," Jourová wrote on Twitter.

Companies such as Google, Microsoft and Meta, which have signed the EU code of conduct against disinformation, are expected to submit their security plans in July. Twitter, which recently withdrew from the code, will be "scrutinised vigorously and urgently."

Preventing the AI spam dystopia

Recently, AI-generated images caused a stir when a fake photo of Donald Trump created with Midjourney went viral on Twitter. In response, Midjourney introduced a new AI-based moderation system.

But similarly powerful open-source tools for images and text come without such safeguards, although the barrier to entry is (still) higher than for commercial products.

Labeling and transparency are likely to be even more challenging for AI text. Identifying AI text from the written word alone is controversial: detectors, including OpenAI's own, do not achieve reliable recognition rates.

There is also the risk that text not clearly identified as human-written will be assumed to be AI-generated. And at what point does text count as AI-generated - 100 percent machine content, 51 percent, or is 10 percent AI text enough to raise alarms?

Sam Altman, CEO of OpenAI, is skeptical that AI text detectors and markers will take off, although OpenAI is working on solutions. They may be useful for a transitional period, but it is impossible to build perfect detectors, he says.

Another approach is to authenticate the sender as human rather than the content itself. Altman is investing in a company that records iris scans and stores them on a blockchain. Authenticating the sender would also address a broader criticism of AI as a media tool.

Summary
  • Věra Jourová, Vice-President for Values and Transparency at the EU Commission, calls for AI-generated content to be clearly labeled and not misused for disinformation. This should be ensured by including a specific section in the EU's anti-disinformation code.
  • Companies such as Google, Microsoft and Meta, which have signed the EU code of conduct against disinformation, are due to submit their AI content security plans in July. Twitter, which recently withdrew from the code, will be "scrutinised vigorously and urgently."
  • There are challenges with labeling and transparency of AI-generated images and text. Content authentication is controversial, and the development of perfect AI text detectors seems unlikely. One possible solution may be to authenticate the sender rather than the content itself.