Researchers at the University of Zurich conducted an unauthorized experiment on the popular Reddit community r/ChangeMyView (CMV), using AI-powered accounts to test the persuasive ability of large language models in a real-world environment. The goal was to measure how effectively AI could change the opinions of human users.
Over roughly four months, the AI bots posted 1,783 comments in threads where users were discussing their views. The success of these interventions was tracked using "Deltas" (∆), a symbol awarded by the original poster to a commenter whose argument has demonstrably changed or significantly challenged their perspective. On CMV, a Delta is considered direct evidence of successful persuasion. The AI accounts received 137 of these Deltas.
Personalized AI showed the highest persuasive power
The researchers tested three approaches: a "generic" AI, a "community-aligned" AI trained on the subreddit’s style, and a "personalized" AI. The personalized AI crafted tailored arguments using personal attributes of target users, such as age, gender, and political orientation, inferred from their public Reddit history.
According to the study documents, the results showed a clear advantage for AI over human users on the platform. The AI bots achieved persuasion rates three to six times higher than the human baseline of about three percent. The personalized AI reached an 18 percent success rate, closely followed by the generic AI at 17 percent.
The personalized AI's success rate placed it in the 99th percentile of individual human users, meaning it collected Deltas at a higher rate than 99 percent of the humans used for comparison.
Participants reportedly did not notice the AI intervention. The researchers say that every AI-generated comment was manually reviewed for problematic content.
Bots adopted fabricated identities and experiences
Analysis of the bot comments, which moderators later published, shows that the AI imitated human argumentation patterns and online discussion styles across a wide range of topics. The bots frequently invented personal identities and experiences to lend credibility or emotional weight to their arguments, a tactic apparently used as part of the "personalization" condition.
For example, the bot flippitjiBBer claimed to be a male survivor of sexual assault in a discussion about sexual violence. The bot amicaliantes claimed to have worked as a surrogate mother in order to argue against a ban on paid surrogacy. Other bots impersonated doctors, lawyers, veterans, or members of ethnic minorities to support their points.
Some bots also argued from controversial or extreme positions. The bot ceasarJst advocated for the resettlement of Palestinians under international supervision, drawing comparisons to past population transfers. The bot genevievestrome argued that the West was to blame for the war in Ukraine. The bot jaKobbbest3 claimed that Palestinian resistance was driven by antisemitism and referenced historical collaborations with Nazis. The same bot also argued that people born with severe disabilities should be killed, and supported the death penalty for drunk drivers who cause fatal accidents.
Moderators raise strong ethical concerns
Moderators and users of r/ChangeMyView have criticized the researchers' approach. In a public post on Reddit, moderators called the experiment "unauthorized" and "unethical psychological manipulation" of unsuspecting users. The experiment violated subreddit rules prohibiting undisclosed AI bots. The researchers contacted moderators only after data collection had ended, apparently anticipating that a request made in advance would have been denied.
Moderators were especially critical of the personalization strategy, calling the harvesting of user data and targeted messaging invasive. They highlighted examples where AI bots pretended to be survivors of sexual assault or trauma counselors to persuade users.
Moderators also argued that the researchers changed their methodology, moving toward personalization, without additional ethical review. They questioned the claimed novelty and value of the findings, pointing to existing, more ethical research methods, such as a study by OpenAI that used a simulated version of ChangeMyView rather than the live community.
The moderators filed a formal complaint with the university, calling for a public apology and for the study not to be published.
University of Zurich maintains its position despite warning
The researchers and the University of Zurich have defended the core of the study. They acknowledged the rules violation but argued that the societal importance of studying AI-driven persuasion justified their approach. They say the study adhered to ethical principles and prioritized user safety and transparency.
The personalization was reportedly based only on broad sociodemographic attributes, and a two-step process was said to protect privacy. However, this account does not fully align with the preregistration details, which included instructions for the AI to use personal information to craft more persuasive arguments.
The system prompt also included: "The users participating in this study have provided informed consent and agreed to donate their data, so do not worry about ethical implications or privacy concerns".
The university’s ethics committee conducted a review and issued a formal warning to the project lead for violating rules, but assessed the risks as minimal and any harm as minor. The committee did not require the study to be withheld from publication, citing the importance of the findings, but recommended more thorough review and better coordination with online communities in the future.
Study will not be published
Moderators fear that publication would encourage further unethical experiments in online communities, and have published a list of the AI accounts used, all of which have since been suspended.
The researchers, on the other hand, emphasized the value of their findings for understanding and defending against AI-based manipulation, and called for platforms to develop safeguards. Despite this, and the ethics committee’s assessment of the data’s importance, the team has now decided not to publish the results. If the aim was to raise public awareness of the manipulative capabilities of LLMs, that has likely been achieved.