ChatGPT-generated content is finding its way onto Twitter. Are we about to see the wave of manipulative, democracy-threatening bots that have been predicted for years?
2016 was the year of the Brexit referendum and the election of Donald Trump as US president. These events sparked a heated debate about the impact of social media bots on the democratic process. Were there people using this technology to manipulate the masses?
There is no doubt that social media played, and continues to play, a central role in these and other events. But some researchers suspected that state and other actors were using bots to automate opinion-making - and that this automation was effective.
Depending on the report, bots were said to number in the tens of thousands, hundreds of thousands or even millions, extending their reach on Twitter and other platforms. The debate continues to flare up, most recently when Elon Musk spoke of a massive bot problem on the platform prior to his acquisition of Twitter.
The role of social bots in democratic processes is not scientifically proven
While "bot" is the usual term in the US, German-speaking countries more commonly speak of "social bots" - whether out of a desire for terminological precision or out of "German Angst".
Researchers such as the Berlin data journalist Michael Kreil and Prof. Dr.-Ing. Florian Gallwitz of TH Nuremberg have been studying social media bots since 2016.
Kreil's conclusion on the Trump case: "Social bots influenced the election. Sounds plausible? Yes, it does. Is it scientifically proven? Not a bit." The researcher speaks of an "army that never existed".
The two researchers also criticize the tools designed to detect bots: these are inaccurate, they say, and often classify humans as bots. That in turn has shifted the terminology: a bot is no longer just an automated, possibly AI-powered system. A human account with only a few followers that takes part in a coordinated campaign - that is, one showing "bot-like" behavior - is enough to earn the label.
"The countless media reports in recent years about human-like political social bots, which are said to have the ability to automatically generate texts, are pure fairy tales. They are based on methodologically flawed research that violates the most basic scientific standards," says Gallwitz. According to the researcher, "ordinary human Twitter users have repeatedly been falsely labeled as 'social bots' en masse".
Social bots, then, were more dystopia than reality - and an attempt to explain supposedly inexplicable events like Brexit.
Is ChatGPT turning dystopia into reality?
Six years later, with OpenAI's ChatGPT, a technology has become publicly available that forces us to ask new questions about what social bots can do.
That's because the quality of the chatbot far exceeds that of previously available systems. Texts generated by ChatGPT are hardly recognizable as machine-made, if at all. In fact, there are already examples of ChatGPT-generated Twitter posts.
So are we in for a change?
"With ChatGPT, and to a limited extent with its basic model GPT-3, a technology is now available for the first time with which it would be possible to automatically generate freely formulated tweets on current topics or even largely meaningful replies to tweets from other accounts," says Gallwitz about OpenAI's chatbot.
With a suitable combination of OpenAI's and Twitter's programming interfaces, a bot could be built that automatically joins conversations or tweets current news with a short summary, says the professor of media informatics.
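To make the idea concrete, here is a minimal sketch of such a pipeline in Python. It assumes the pre-1.0 openai package and tweepy's v2 Client; the model name, search query, and credentials are illustrative placeholders, not part of Gallwitz's description - and actually running such a bot would likely violate platform rules.

```python
# Minimal sketch of the bot described above: fetch recent tweets on a
# topic, generate replies with a GPT-3 model, and post them.
# Assumptions: openai<1.0 Python package, tweepy v2 Client, valid credentials.
# All names below (model, query, keys) are illustrative placeholders.

import openai
import tweepy

openai.api_key = "OPENAI_API_KEY"  # placeholder credential

client = tweepy.Client(
    bearer_token="BEARER_TOKEN",  # placeholder credentials
    consumer_key="API_KEY",
    consumer_secret="API_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_SECRET",
)

def generate_reply(tweet_text: str) -> str:
    """Ask a GPT-3 model for a short, tweet-length reply."""
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3 variant
        prompt=f"Reply to this tweet in under 280 characters:\n{tweet_text}\nReply:",
        max_tokens=60,
    )
    return response.choices[0].text.strip()

# Join conversations: find recent tweets on a topic and reply to each.
tweets = client.search_recent_tweets(query="some news topic", max_results=10)
for tweet in tweets.data or []:
    client.create_tweet(
        text=generate_reply(tweet.text),
        in_reply_to_tweet_id=tweet.id,
    )
```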
Is ChatGPT too boring for Twitter?
However, Gallwitz is skeptical that a ChatGPT bot could attract attention on Twitter.
"Reach on Twitter is driven primarily by wit and originality, and often by provocation, emotionality and novelty. ChatGPT, on the other hand, has been trained to produce predictable sequences of words that rarely contain surprises or even new thoughts, and are often wordy and long-winded," says Gallwitz. "This is the flat opposite of what promises success on a medium like Twitter."
Besides, he says, there are tools that can recognize ChatGPT text quite reliably. "In this respect, I don't expect ChatGPT or similar tools to form the basis of political manipulation attempts on Twitter in the foreseeable future."
If Gallwitz is right, the dystopia will remain just that: a threat whose realization is still an open question.
In fact, AI systems like GPT-3 could play a very different role in information warfare: in one study, GPT-3, conditioned on demographic background information, produced responses that closely matched the response patterns of the corresponding human subpopulations. The researchers see large language models as novel and powerful tools for social and political science.
But they also warn of potential misuse. With some methodological advances, the models could be used to test specific subpopulations for their susceptibility to disinformation, manipulation or fraud.
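The mechanism behind such studies can be sketched in a few lines: condition the model on a demographic persona, sample answers to a survey question, and compare the resulting distribution with real survey data. The profile fields, question wording, and model name below are hypothetical illustrations, not the study's actual protocol.

```python
# Sketch of persona-conditioned sampling: prompt a GPT-3 model with a
# demographic profile and collect its answers to a survey question.
# Profile fields, question, and model name are illustrative assumptions.

import openai

openai.api_key = "OPENAI_API_KEY"  # placeholder credential

PROFILE = (
    "I am {age} years old, live in {region}, and consider myself "
    "politically {ideology}."
)

def simulated_answer(age: int, region: str, ideology: str, question: str) -> str:
    """Answer a survey question from the perspective of a given persona."""
    prompt = PROFILE.format(age=age, region=region, ideology=ideology)
    prompt += f"\nQuestion: {question}\nAnswer:"
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3 variant
        prompt=prompt,
        max_tokens=50,
        temperature=1.0,  # sample, so repeated calls yield a distribution
    )
    return response.choices[0].text.strip()

# Repeating this across many personas yields response distributions that
# can be compared with real survey data for the same subpopulations.
print(simulated_answer(34, "a large city", "moderate", "Do you trust the news media?"))
```

It is the same mechanism the researchers warn about: swap the survey question for a piece of disinformation, and the setup becomes a testbed for how different subpopulations might react to it.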