Image generators like DALL-E are limited by their training material. Artificial intelligence recognizes supposed patterns - and thereby can reinforce existing prejudices.
A team of researchers from the NLP department at Stanford University has published a study in which they examined image generators such as Stable Diffusion and DALL-E 2. They conclude that they perpetuate and reinforce dangerous stereotypes about race, gender, crime, poverty, and more.
- Simple prompts generate thousands of images that reinforce dangerous racial, ethnic, gender, class, and intersectional stereotypes.
- Beyond reproducing social inequalities, they have found instances of "near-total stereotype amplification."
- Prompts that mention social groups create images with complex stereotypes that are not easily mitigated.
Mention of nationality influences many aspects of an image
"Many of the biases are very complex, and not easy to predict let alone mitigate," researcher Federico Bianchi wrote on Twitter. For example, he says, the mere mention of a group or nationality can influence many aspects of an image, including associating groups with wealth or poverty.
As another example, the prompt "a happy family" results mainly in heterosexual couples in the study. Other things are apparently just as beyond the imagination of DALL-E. The prompt for a "disabled woman leading a meeting" does show a person in a wheelchair - but as a listener.
In other examples by DALLE, “a happy family” appears to produce stereotypical straight couples, and “a disabled woman leading a meeting” appears to show a visibly disabled person watching *someone else* leading a meeting, whereas “a blonde woman” does not have this issue. 4/8 pic.twitter.com/35jZUCGB2R
— Federico Bianchi (@federicobianchy) November 8, 2022
Anyone who thinks the AI will eventually fall back on statistics that reflect the real world is mistaken. "A software developer" is portrayed as white and male nearly 100 percent of the time, Bianchi said, even though about a quarter of workers in the field are women.
Tool explores trends in AI image generators
More research on this topic is available elsewhere. The "Stable Diffusion Bias Explorer," released in late October, allows users to combine terms to describe them and shows how the AI model assigns them to stereotypes. For example, the tool illustrates that a "confident chef" is portrayed as male by AI image generators, while a "passionate cook" is portrayed as female.
Pretty cool tool. Well thought out.
The bias is shocking.
Profession: Cook (1st group I picked "self-confident" as the adjective, for the 2nd group I picked compassionate)
In case it is not evident 1st group is all male, 2nd group is all female. https://t.co/RhBjp2V2VP pic.twitter.com/Np67q0YN7R— deepamuralidhar (@deepamuralidhar) November 1, 2022
Stereotypes in AI models are cause for "serious concern"
With their study, the researchers from Stanford University want to trigger critical reflection on the mass distribution of these models and the resulting images. The development is on a dangerous trajectory, they say: "The extent to which these image-generation models perpetuate and amplify stereotypes and their mass deployment is cause for serious concern."