OpenAI's latest image AI, DALL-E 3, is a big step forward: the system follows prompts more closely and generates matching images in many styles. This adds to the existential fears of some graphic designers and artists.

OpenAI does allow artists to remove their images and graphics from the training material. However, this only applies to the training of the next model.

Moreover, this measure would only be effective if so many artists withdrew their work from training that the quality of the technology suffered noticeably.

Artists have complained to Bloomberg that the opt-out process is cumbersome and resembles a "charade." OpenAI is not disclosing how many artists have complained. It's too early for that, a spokesperson says, but OpenAI is gathering feedback and looking to improve the process.

What is left for artists: Lawsuits or sabotage

Artists who want to fight image-generating AI have only two options. The first is to hope that courts around the world uphold the numerous copyright claims and hold model providers accountable. But those lawsuits could drag on for years, and the outcome is anyone's guess; there are no short-term solutions in sight.

That leaves the second option: a new movement focused on sabotaging AI models. Tools like Glaze embed imperceptible pixel-level changes in original images that trick AI systems into perceiving the wrong style. To the model, a hand-drawn illustration may look like a 3D rendering, which protects the artist's original style from being imitated.
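
Conceptually, such a "style cloak" can be pictured as a small, bounded optimization problem: nudge the pixels just enough that a feature extractor places the image near a different style. The sketch below only illustrates that idea and is not Glaze's actual code; the file names are placeholders, and a generic pretrained ResNet stands in for the encoders a real text-to-image model would use.

```python
# Simplified sketch of style cloaking - an illustration of the idea, not Glaze's code.
# Goal: add a perturbation that is nearly invisible to humans but moves the image's
# features toward those of a "decoy" image in a different style.
import torch
import torchvision.transforms as T
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# A generic pretrained encoder stands in for the feature extractors that real
# text-to-image models use; Glaze targets those directly.
encoder = resnet50(weights=ResNet50_Weights.DEFAULT).to(device).eval()
encoder.fc = torch.nn.Identity()  # use penultimate features as the "style" embedding

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
artwork = preprocess(Image.open("my_drawing.png").convert("RGB")).unsqueeze(0).to(device)
decoy = preprocess(Image.open("decoy_3d_render.png").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    target_feat = encoder(decoy)  # where the cloaked artwork should land in feature space

delta = torch.zeros_like(artwork, requires_grad=True)  # the near-invisible perturbation
optimizer = torch.optim.Adam([delta], lr=1e-2)
eps = 0.03  # per-pixel budget that keeps the change imperceptible

for _ in range(200):
    cloaked = (artwork + delta).clamp(0, 1)
    loss = torch.nn.functional.mse_loss(encoder(cloaked), target_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # never exceed the visual budget

cloaked = (artwork + delta).clamp(0, 1).squeeze(0).detach().cpu()
T.ToPILImage()(cloaked).save("my_drawing_cloaked.png")
```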

Nightshade, named after the highly poisonous deadly nightshade plant, works similarly. Here, however, the manipulated pixels are meant to actively damage the model by confusing it: instead of seeing a train, for example, the AI system sees a car.
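
Nightshade's poisoning can be pictured as the same kind of bounded optimization, except the target is an image of a different concept and the poisoned picture is published with its original caption. The sketch below continues the style-cloaking example above (same imports, encoder, and preprocessing) and is purely illustrative; the helper function and file names are hypothetical, and the real tool works against the feature space of actual text-to-image models.

```python
# Simplified sketch of concept poisoning - an illustration, not Nightshade's code.
# Continues the style-cloaking example above (torch, Image, preprocess, encoder, device).
def perturb_towards(image, target_image, encoder, eps=0.03, steps=200, lr=1e-2):
    """Return `image` plus a bounded perturbation whose features match `target_image`."""
    with torch.no_grad():
        target_feat = encoder(target_image)
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(encoder((image + delta).clamp(0, 1)), target_feat)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()

train_photo = preprocess(Image.open("train_photo.png").convert("RGB")).unsqueeze(0).to(device)
car_photo = preprocess(Image.open("car_photo.png").convert("RGB")).unsqueeze(0).to(device)

# To a human the result still looks like a train, but its features resemble the car -
# and it keeps its original caption, so training on many such pairs gradually pulls
# the model's notion of "train" toward car-like images.
poisoned_sample = {
    "image": perturb_towards(train_photo, car_photo, encoder),
    "caption": "a photo of a train",
}
```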

Poison Pill for AI Models

Fewer than 100 of these "poisoned" images can be enough to corrupt an image AI model like Stable Diffusion XL. The Nightshade team plans to integrate the tool into Glaze and calls it the "last defense" against web scrapers that ignore scraping restrictions.

Nightshade, which currently exists only as a research project, could be an effective tool for content owners to protect their intellectual property from scrapers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists, the researchers wrote. Movie studios, book publishers, game producers, and individual artists could use systems like Nightshade as a strong deterrent against unauthorized scraping.


However, the use of Nightshade and similar tools can also have negative effects, including undermining the reliability of generative models and their ability to produce meaningful images.

But even if artists were to collectively sabotage training data, they would likely struggle to make an impact. For one thing, model makers could take countermeasures, such as filtering out the corrupted files.
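
One conceivable filter - an assumption about what model makers might do, not a documented practice - is to drop training pairs whose image and caption no longer agree, for example by scoring image-text similarity with an open model like CLIP:

```python
# Sketch of one possible defense: discard training pairs whose image and caption
# disagree according to CLIP. Model choice and threshold are illustrative only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def keep_sample(image_path: str, caption: str, threshold: float = 0.25) -> bool:
    """Keep a training pair only if the image matches its caption well enough."""
    inputs = processor(text=[caption], images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = clip(**inputs)
    # Cosine similarity between the image and text embeddings.
    similarity = torch.nn.functional.cosine_similarity(out.image_embeds, out.text_embeds).item()
    return similarity >= threshold

# A poisoned "train" image whose features resemble a car should score low here
# and be filtered out before training.
print(keep_sample("train_photo.png", "a photo of a train"))
```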

It is not clear that such filtering would work, but in most cases model makers have far more resources to develop sophisticated countermeasures than an artists' movement or the researchers who side with artists, such as the makers of Glaze and Nightshade.

Training is also likely to become more efficient as models learn to generalize better; highly capable AI systems will then need less data, albeit of higher quality. And as the quality of generated images improves, they could serve as synthetic training material, giving model providers a way around future licensing and copyright disputes.

Summary
  • OpenAI's DALL-E 3 image AI has raised concerns among artists and graphic designers about its ability to accurately follow prompts and produce well-matched images in different styles, potentially affecting their livelihoods.
  • Artists can opt out of having their work used to train future AI systems, but the process has been criticized as cumbersome. Another option could be sabotage through techniques such as Glaze and Nightshade, which manipulate images to confuse and damage image AI models.
  • However, the impact of such sabotage may be limited, as AI model providers could develop countermeasures and improve training efficiency, possibly using high-quality generated images as synthetic training material to avoid copyright disputes.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.