A class action lawsuit has been filed in the US against Midjourney and Stability AI, as well as the art platform DeviantArt.
US artists Sarah Andersen, Kelly McKernan, and Karla Ortiz have filed a class action lawsuit in California against Stability AI (Stable Diffusion) and Midjourney. The artists are seeking damages and an injunction to prevent future harm.
Art platform DeviantArt is also named in the suit, accused of supplying thousands or even millions of images to the LAION dataset used to train Stable Diffusion.
According to the plaintiffs, instead of siding with the artists, DeviantArt launched DreamUp, an AI art app based on Stable Diffusion.
Attorney behind AI code lawsuit now targets AI images
Behind the lawsuit is programmer and attorney Matthew Butterick. He is also leading a lawsuit against Microsoft, GitHub, and OpenAI, claiming that GitHub's AI coding assistant Copilot reproduces developers' code snippets without attribution and violates open source licensing terms.
Butterick delivers a harsh verdict on Stable Diffusion and similar systems: "It is a parasite that, if allowed to proliferate, will cause irreparable harm to artists, now and in the future."
Datasets collected without consent are the weak point of current image AI systems
In his brief, Butterick points to what is, from a legal perspective, the biggest weakness of image AI systems: almost no artist has given explicit consent for their works to be used to train an AI system.
Even if the images generated by these systems pass as originals, the generating system would still be built on unauthorized data.
As Butterick puts it, "because all the visual information in the system is derived from the copyrighted training images, the images produced—regardless of outward appearance—are necessarily works derived from those training images."
A recent study examined the uniqueness of images generated by an AI diffusion model and found that the model regularly produces near-exact copies of images from its training dataset, in at least two cases out of 100.
Stability AI founder Emad Mostaque raised the prospect last November that future Stable Diffusion models could be trained on fully licensed datasets. Artists would also be given opt-out mechanisms for their image data.