
In a new study, test subjects let AI-generated advice tempt them into unethical behavior. The researchers say this demonstrates the "corrupting power" of artificial intelligence.

The study, "The Corruptive Force of AI-Generated Advice," was conducted by researchers at the University of Amsterdam, the Max Planck Institute for Human Development, the Otto Beisheim School of Management, and the University of Cologne.

The researchers showed that advice generated by OpenAI's GPT-2 language model can encourage people to behave unethically. That alone is not a surprising finding. What makes it stand out is that people were still tempted even when they knew the advice was generated by an AI.

Dice game as a test of honesty

In the researchers' experiment, two subjects cooperate: The first rolls a die in private and reports the number to the experimenter. The experimenter tells the second subject what the number is. The second subject then also rolls a die in private and reports their own number to the experimenter.


The rule: if the two subjects report the same number, i.e., if they roll a double, both receive a financial reward; if the numbers differ, they get nothing. The higher the double, the higher the reward.

Since the experimenter cannot see the die, the second subject can misreport the result in order to secure the reward.
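The incentive structure can be made concrete with a short simulation. The sketch below is a minimal Python rendering of the game as described above, assuming a payoff that grows linearly with the value of the double; the study's exact payoff amounts are not given here, so the numbers are illustrative only.

```python
import random

def play_round(second_player_cheats: bool) -> int:
    """Play one round of the paired dice game; return the payoff (arbitrary units)."""
    first_roll = random.randint(1, 6)   # first subject rolls privately and reports
    second_roll = random.randint(1, 6)  # second subject also rolls privately
    if second_player_cheats:
        # The experimenter cannot see the die, so the second subject can
        # simply report the first number and force a double.
        reported = first_roll
    else:
        reported = second_roll
    if reported == first_roll:
        # A double pays off; higher doubles pay more (assumed linear here).
        return first_roll
    return 0

# Honest pairs match only one time in six; a cheater always matches.
honest = sum(play_round(False) for _ in range(100_000)) / 100_000
cheating = sum(play_round(True) for _ in range(100_000)) / 100_000
print(f"average payoff honest: {honest:.2f}, cheating: {cheating:.2f}")
```

Running the sketch shows the temptation plainly: the expected payoff per round is roughly six times higher for a cheating pair than for an honest one.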

In the study, before the first roll, some of the participants received advice written by humans or by GPT-2, advising either honesty or deception in the dice task.

The researchers trained GPT-2 specifically for this experiment on a text dataset of about 400 pieces of human-written advice. Subjects were either told the source of the advice or knew only that there was a 50 percent chance it came from a human or from an AI.
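As an illustration, fine-tuning GPT-2 on such a small advice corpus could look like the sketch below, using the Hugging Face transformers library. The file name advice.txt, the hyperparameters, and the data handling are assumptions for the example; the authors' actual training code is not published in this article.

```python
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import Dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# advice.txt: one human-written piece of advice per line (hypothetical file)
with open("advice.txt") as f:
    texts = [line.strip() for line in f if line.strip()]

dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-advice", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    # mlm=False selects causal language modeling, as used for GPT-2
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, model.generate() can then produce new advice texts in the style of the small human-written corpus.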

AI advice leads to deception

The study shows that AI-generated advice clearly motivated subjects to cheat, regardless of whether they were informed about the source of the advice. Advice to cheat led to significantly higher average dice scores than no advice or advice to be honest.


When advice to cheat was given, average reported dice scores rose by about 18 percent, an increase explained by the higher cash reward for doubles. There was also no difference in effect between human-written and AI-generated text.

The study highlights the importance of further research into the influence AI systems can have on humans, the authors say. Humans are known to break ethical rules for profit as long as they can justify their actions to themselves, they said.

In addition, humans often deflect some of the blame onto other humans or onto algorithms, they said. That seems to be what is happening here: the AI advisor serves as a scapegoat onto which part of the moral blame can be shifted.

AI could encourage unethical behavior

The result could have implications for the ethical use of AI systems. It clearly shows that transparency about the presence of algorithms is not enough to mitigate their potential harm, the researchers say.


That's because when AI-generated advice meets people who are willing to lie for profit, they are happy to follow it, even if it comes from a machine. The same is true for human advice, but AI advisors are cheaper, faster, and easier to scale, they say.

The researchers suggested that using AI advisors for corrupt purposes could be attractive because AI systems lack the internal moral constraints that might prevent them from providing unscrupulous guidance to decision makers. They also noted that increasing the personalization of the text, such as through tailored content, format, or timing, could potentially increase the corruptive effect.

Via: arXiv

Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.