
AI content is currently in a Wild West phase: whatever can be generated gets generated, and there are no transparency rules. China and OpenAI are working on countermeasures.

Beyond the question of copyright on the content used to train AI models, another fundamental issue is troubling the AI industry: what about all the AI-generated content that is barely recognizable as such (images) or no longer recognizable at all (text)?

What consequences could this lack of transparency about AI content have for society? Are we facing information overload and mass layoffs in the media industry? Are essays and term papers dead? Is a wave of fake news and spam rolling toward us? All of these problems already exist, of course. But AI automation could take their scale to a new level.

For society to make conscious decisions about these and similar questions, and to regulate them, we first need transparency: which works have a human behind them, and which a machine? Without that transparency, attempts at regulation will struggle.


China bans AI media without watermarks

The Cyberspace Administration of China, the authority that among other things regulates and censors the internet in China, is banning the creation of AI media without watermarks. The new rule takes effect on January 10, 2023.

The authority speaks of the dangers posed by "deep synthesis technology," which, while meeting user needs and enhancing the user experience, is also abused to spread illegal and harmful information, damage reputations, and forge identities.

According to a statement from the authority, such abuses endanger national security and social stability. New products in this segment must first be evaluated and approved by the regulator.

The authority stresses the importance of watermarks that identify AI content as such without restricting the software's functionality. These watermarks must not be deleted, manipulated, or hidden. Users of the AI software must register accounts under their real names, and the content they generate must be traceable, the authority says.
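The rules leave open what such labeling looks like in practice. Purely as an illustration, and not anything the regulation or a specific provider prescribes, a generator could stamp a visible notice onto each image and store a machine-readable tag in its metadata. The function name, label text, and file paths below are hypothetical, using the Pillow library.

```python
# Hypothetical sketch: visibly and machine-readably label a generated image.
# Label text, metadata keys, and paths are illustrative assumptions, not a standard.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, service: str = "example-generator") -> None:
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Visible notice in the lower-left corner
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))

    # Machine-readable tags stored in the PNG metadata
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", service)  # traceability back to the service
    img.save(dst_path, pnginfo=meta)

# Example usage (assumes a local file "generated.png" exists):
# label_ai_image("generated.png", "generated_labeled.png")
```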

OpenAI explores detection systems for AI texts

Unlabeled AI-generated text in particular could pose new challenges for society. One example is the education system, where some have feared the death of homework since the introduction of ChatGPT.


And rightly so: large language models like ChatGPT are particularly good at rephrasing widely documented, frequently written-down knowledge in a compact, understandable, and largely error-free way. That makes them tailor-made for school assignments, which are usually based on existing, relatively basic knowledge.

Other examples of the potentially harmful use of AI texts include sophisticated spam or the mass distribution of fraudulent content and propaganda on fake websites or social media profiles. All of this is already happening, but large language models could increase the quality and volume of this content.

OpenAI, the company behind ChatGPT and GPT-3, is therefore working on making AI-generated content detectable through technical, statistical marking. The company is aiming for a future in which it will be much harder to pass off an AI-generated text as written by a human.

The company is experimenting with a server-level cryptographic wrapper around text generation that embeds a watermark recognizable via a secret key. The same key is used both to apply the watermark and to check a text's authenticity.
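OpenAI has not published the details of this scheme. Purely to illustrate the general idea of key-seeded statistical watermarking, here is a toy sketch in Python that nudges token sampling toward a pseudorandom "green" subset of the vocabulary derived from a secret key. This is an assumption for illustration, not OpenAI's actual method; the key, bias factor, and function names are made up.

```python
# Toy sketch of key-seeded statistical watermarking. Not OpenAI's scheme:
# all names, the bias factor, and the "green set" idea are illustrative.
import hashlib
import random

SECRET_KEY = b"server-side-secret"  # assumption: only the provider knows this key

def green_set(context: tuple, vocab: list, key: bytes = SECRET_KEY) -> set:
    """Pseudorandomly mark half the vocabulary as 'green' for this context."""
    seed = hashlib.sha256(key + repr(context).encode()).digest()
    rng = random.Random(seed)
    return set(rng.sample(vocab, k=len(vocab) // 2))

def sample_token(probs: dict, context: tuple, bias: float = 2.0) -> str:
    """Sample the next token, nudging probability mass toward 'green' tokens."""
    green = green_set(context, sorted(probs))
    weights = {t: p * (bias if t in green else 1.0) for t, p in probs.items()}
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]
```

Because the nudge is invisible without the key, readers cannot tell watermarked from unwatermarked text, but the key holder can.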


"Empirically, a few hundred tokens seem to be enough to get a reasonable signal that yes, this text came from [an AI system]. In principle, you could even take a long text and isolate which parts probably came from [the system] and which parts probably didn’t," says University of Texas computer science professor Scott Aaronson, who is currently a visiting researcher at OpenAI working on the system.

OpenAI's researchers plan to present this system in more detail in a paper in the coming months. It is also just one of the detection techniques currently being researched, the company says.

But even if OpenAI or another company succeeds in implementing a working detection mechanism and the industry can agree on a standard, it probably won't solve the AI transparency problem once and for all.

Stable Diffusion shows that open-source generative AI can compete with commercial offerings, and the same could soon be true for language models. Since freely available models are beyond the reach of any single provider's watermarking, labeling AI-generated content alone may not be enough: in the future, an authentication system for human authorship might also be required.
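One conceivable building block for such a system, sketched here purely as an assumption rather than an existing standard: an author signs a hash of their text with a private key, and readers verify the signature against the author's published public key (using the third-party `cryptography` package). This proves who vouches for a text, not how it was produced.

```python
# Hypothetical sketch of attesting human authorship via a digital signature.
# Requires the third-party "cryptography" package; not an existing standard.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

author_key = Ed25519PrivateKey.generate()   # stays with the author
public_key = author_key.public_key()        # published, e.g. on the author's profile

text = "This text was written entirely by a human."
digest = hashlib.sha256(text.encode()).digest()
signature = author_key.sign(digest)         # attached to the published text

try:
    public_key.verify(signature, digest)    # raises if text or signature was altered
    print("authorship attestation verified")
except InvalidSignature:
    print("attestation invalid")
```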

This text was written entirely by a human (Matthias Bastian).

Summary
  • AI content is currently in a Wild West phase: whatever can be generated gets generated, and there are no transparency rules.
  • The education system, for example, is feeling the effects of this, as it can no longer assume human authorship for essays.
  • China and OpenAI want to provide more transparency for AI content with the help of watermarks. OpenAI is working on a technical solution.