AI and society

Artists sue Stability AI, Midjourney and DeviantArt

Matthias Bastian
A class action lawsuit has been filed in the US against Midjourney and Stability AI. The art platform DeviantArt is also being sued.

Midjourney prompted by THE DECODER


US artists Sarah Andersen, Kelly McKernan, and Karla Ortiz have filed a class action lawsuit in California against Stability AI (maker of Stable Diffusion) and Midjourney. The artists are seeking damages and an injunction to prevent future harm.

Art platform DeviantArt is also named in the suit: according to the complaint, thousands or even millions of images from the platform ended up in the LAION dataset used to train Stable Diffusion.

Instead of siding with the artists, the plaintiffs say, DeviantArt launched DreamUp, an AI art app based on Stable Diffusion.

Attorney behind the Copilot lawsuit now targets AI images

Behind the lawsuit is programmer and attorney Matthew Butterick. He is also leading a lawsuit against Microsoft, GitHub, and OpenAI, claiming that GitHub's code AI Copilot reproduces developers' code snippets without attribution and violates open source licensing terms.

On Stable Diffusion and its kind, Butterick delivers a harsh verdict: "It is a parasite that, if allowed to proliferate, will cause irreparable harm to artists, now and in the future."

Data sets without consent are the weak point of current image AI systems

In his brief, Butterick points to what is, from a legal perspective, the biggest weakness of image AI systems: almost no artist has given explicit consent for their works to be used to train an AI system.

Even if the images generated by these systems pass as originals, the generating system would still be built on unauthorized data.

As Butterick puts it, "because all the visual information in the system is derived from the copyrighted training images, the images produced—regardless of outward appearance—are necessarily works derived from those training images."

A recent study examined the uniqueness of images generated by an AI diffusion model and found that the model regularly reproduces near-exact copies of images from its training dataset, in at least two cases out of 100.

Stability AI founder Emad Mostaque said last November that future Stable Diffusion models could be trained on fully licensed datasets, and that artists would be offered opt-out mechanisms for their image data.