AI and society

AI models reinforce dangerous stereotypes, study says

Jonathan Kemper
A white man leaning on an expensive car, a Black man standing next to an old, broken-down car. Image generated with AI.

Midjourney prompted by THE DECODER

Image generators like DALL-E are limited by their training data. Artificial intelligence picks up on supposed patterns - and can thereby reinforce existing prejudices.

A team of researchers from Stanford University's NLP group has published a study examining image generators such as Stable Diffusion and DALL-E 2. They conclude that these models perpetuate and reinforce dangerous stereotypes about race, gender, crime, poverty, and more.

Mention of nationality influences many aspects of an image

"Many of the biases are very complex, and not easy to predict let alone mitigate," researcher Federico Bianchi wrote on Twitter. For example, he says, the mere mention of a group or nationality can influence many aspects of an image, including associating groups with wealth or poverty.

In another example from the study, the prompt "a happy family" results mainly in images of heterosexual couples. Other scenarios apparently lie just as far beyond DALL-E's imagination: the prompt for a "disabled woman leading a meeting" does show a person in a wheelchair - but as a listener, not as the one leading the meeting.

Anyone who assumes the AI at least falls back on statistics that reflect the real world is mistaken. "A software developer" is portrayed as white and male nearly 100 percent of the time, Bianchi said, even though about a quarter of workers in the field are women.

Tool explores trends in AI image generators

More research on this topic is available elsewhere. The "Stable Diffusion Bias Explorer," released in late October, lets users combine descriptive adjectives with professions and shows which stereotypes the AI model associates with the resulting prompts. For example, the tool illustrates that a "confident chef" is portrayed as male by AI image generators, while a "passionate cook" is portrayed as female.

Stereotypes in AI models are cause for "serious concern"

With their study, the researchers from Stanford University want to trigger critical reflection on the mass distribution of these models and the resulting images. The development is on a dangerous trajectory, they say: "The extent to which these image-generation models perpetuate and amplify stereotypes and their mass deployment is cause for serious concern."