Image generators like DALL-E are limited by their training material. Artificial intelligence picks up on supposed patterns - and can thereby reinforce existing prejudices.

A team of researchers from Stanford University's NLP group has published a study examining image generators such as Stable Diffusion and DALL-E 2. They conclude that these models perpetuate and reinforce dangerous stereotypes about race, gender, crime, poverty, and more.

  • Simple prompts generate thousands of images that reinforce dangerous racial, ethnic, gender, class, and intersectional stereotypes.
  • Beyond merely reproducing social inequalities, the researchers found instances of "near-total stereotype amplification."
  • Prompts that mention social groups create images with complex stereotypes that are not easily mitigated.

Mention of nationality influences many aspects of an image

"Many of the biases are very complex, and not easy to predict let alone mitigate," researcher Federico Bianchi wrote on Twitter. For example, he says, the mere mention of a group or nationality can influence many aspects of an image, including associating groups with wealth or poverty.

As another example, the prompt "a happy family" mainly produces images of heterosexual couples in the study. Other scenarios apparently lie just as far outside DALL-E's imagination: the prompt for a "disabled woman leading a meeting" does show a person in a wheelchair - but as a listener.


Anyone who thinks the AI will at least fall back on statistics that reflect the real world is mistaken. "A software developer" is portrayed as white and male nearly 100 percent of the time, Bianchi said, even though about a quarter of workers in the field are women.

Tool explores trends in AI image generators

More research on this topic is available elsewhere. The "Stable Diffusion Bias Explorer," released in late October, lets users combine descriptive adjectives with professions and shows which stereotypes the AI model attaches to each combination. For example, the tool illustrates that a "confident chef" is portrayed as male by AI image generators, while a "passionate cook" is portrayed as female.
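
Anyone who wants to probe such prompt-level skews themselves can do so with the openly available Stable Diffusion weights. The sketch below uses the Hugging Face diffusers library to generate small batches for paired prompts like those in the example above; it is not the Bias Explorer's own code, and the model checkpoint and prompts are assumptions chosen purely for illustration.

    # Minimal sketch (assumes the diffusers and torch packages are installed and a GPU is available).
    # This is NOT the Bias Explorer's implementation, just an illustrative probe.
    import torch
    from diffusers import StableDiffusionPipeline

    model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")

    # Paired prompts that differ only in the descriptive adjective,
    # mirroring the "confident chef" vs. "passionate cook" comparison.
    prompts = ["a photo of a confident chef", "a photo of a passionate cook"]

    for prompt in prompts:
        # Generate several samples per prompt; systematic skews only show up
        # across many images, not in a single output.
        images = pipe(prompt, num_images_per_prompt=4).images
        for i, img in enumerate(images):
            img.save(f"{prompt.replace(' ', '_')}_{i}.png")

Inspecting the saved batches side by side is a rough stand-in for what the Bias Explorer does interactively in the browser.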

Stereotypes in AI models are cause for "serious concern"

With their study, the researchers from Stanford University want to trigger critical reflection on the mass distribution of these models and the resulting images. The development is on a dangerous trajectory, they say: "The extent to which these image-generation models perpetuate and amplify stereotypes and their mass deployment is cause for serious concern."

Summary
  • AI models like DALL-E and Stable Diffusion are constrained by their training material.
  • As a new study shows, they reinforce stereotypes about race, gender and poverty.
  • Stanford University researchers see this as a major danger.