Summary

In a field experiment, researchers used real-time phrasing assistance from a language model to improve the quality of discussion in a chat about gun control in the United States. The experiment supports the researchers' thesis that chatbots can have a positive impact on the culture of debate.

In total, the researchers recruited 1,574 people with different views on US gun control. The participants discussed the issue in an online chat room.

The perfect setting for chaos, anger, and contention? Apparently not when a language model focused on kindness and appreciation supports the discussion.

Two-thirds of AI suggestions accepted

A large language model (GPT-3) read the conversation and suggested alternative wording before a message was sent. The suggestions were intended to make messages more reassuring, validating, or polite.


Subjects could choose to use the suggested wording, edit it, or keep their original message. In total, GPT-3 made 2,742 suggestions for improvement, which were accepted about two-thirds of the time (1,798 times). A conversation lasted an average of twelve messages, and participants were paid.
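The study does not publish its implementation, but the flow it describes can be sketched in a few lines. Everything below is a hypothetical illustration: the prompt wording and the function names are assumptions, not the researchers' code, and the model call itself is left out.

```python
# Illustrative sketch only -- not the study's actual implementation.
# The prompt text and function names here are hypothetical.

def build_rephrase_prompt(message: str, tone: str = "polite and validating") -> str:
    """Assemble a prompt asking a language model to reword a chat
    message more empathetically while preserving its argument."""
    return (
        f"Rewrite the following chat message so it is {tone}, "
        f"without changing its substantive point:\n\n{message}"
    )

def resolve_message(original: str, suggestion: str, choice: str) -> str:
    """Mirror the study's flow: the sender may accept the AI suggestion,
    keep the original, or type an edited version of their own."""
    if choice == "accept":
        return suggestion
    if choice == "keep":
        return original
    return choice  # any other string is treated as the sender's own edit

# Example: the sender accepts the softer rewording before it is sent.
sent = resolve_message(
    "That policy is ridiculous and you know it.",
    "I see it differently, and I'd like to explain why that policy worries me.",
    "accept",
)
```

The key design point the paper emphasizes is that the sender stays in control: the suggestion is offered, never imposed, which is why acceptance (about two-thirds of suggestions) is a meaningful measure.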

At certain intervals, GPT-3 intervenes in the conversation, suggesting alternative phrases for the same argument that are more empathetic to the other person. | Image: Argyle et al.

According to the team, the experiment was motivated in particular by the rapid escalation of debates on digital channels and the question of whether AI can help improve the culture of discussion there.

"Such toxicity increases polarization and corrodes the capacity of diverse societies to cooperate in solving social problems," the paper says.

Large language models should promote understanding and respect

The results of the experiment support the team's hypothesis that a language model can improve online discussion culture: According to participating researcher Chris Rytting, a Ph.D. candidate at Brigham Young University, the AI suggestions had "significant effects in terms of decreasing divisiveness and increasing conversation quality, while leaving policy views unchanged."

The researchers measured these improvements using text analysis of conversations before and after an AI suggestion, as well as a pre- and post-chat questionnaire. A follow-up survey nearly three months after the experiment checked whether the chatbot interventions had any lasting unintended effect on the subjects; the team found none.


Political opinions were not affected by the language model, the researchers said. Even if disagreements persist, high-quality political discourse is good for a society's social cohesion and democracy, they said.

"Such dialogue is a necessary, even if insufficient, condition for increasing mutual understanding, compromise, and coalition building."

The next step could be to confirm and deepen these findings in further studies under real-life conditions, such as unpaid and longer conversations among family or friends.

  • A study examines how large language models in online chats about political issues can improve the culture of debate.
  • About 1,500 people with differing views on gun control in the US chatted with each other. The language model suggested more sensitive and friendly wording; the actual content of a chat message remained the same.
  • The empathetic wording guidance had a significant positive effect on conversation culture without changing people's political views, the researchers found.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.