Dr. Christoph Neuberger is a professor at the Institute for Journalism and Communication Studies at Freie Universität Berlin. He is also the scientific director of the Weizenbaum Institute for the Networked Society and a member of the Learning Systems Platform.
For a long time, artificial intelligence (AI) remained a promise - an unfulfilled one. That seems to be changing: with ChatGPT, AI has arrived in everyday life. The chatbot's ability to answer open-ended questions spontaneously, elaborately, and often correctly - even in the form of long texts - is astounding and surpasses anything seen before. It has caused quite a stir and given AI development a whole new significance in public perception. People in many fields are experimenting with ChatGPT; business, science, and politics are exploring its positive and negative possibilities.
It is easy to forget that there is no mind in the machine. The computer pioneer Joseph Weizenbaum, who was born in Berlin a hundred years ago, pointed this out early on. In the mid-1960s he programmed one of the first chatbots. ELIZA, as it was called, could conduct a therapeutic conversation. From today's perspective, its answers were rather simple. Nevertheless, Weizenbaum observed how test subjects developed an emotional relationship with ELIZA and felt understood by it. From this and other examples, he concluded that the real danger does not lie in the computer's capabilities, which in his view are quite limited. Rather, the problem is the false belief in the power of the computer and the voluntary submission of humans to it. Bound up with this is the image of the calculable human being, which is false: respect, understanding, love, the unconscious, and autonomy cannot be replaced by machines. The computer is a tool that can do certain tasks faster and better - but no more than that. Therefore, not every task should be handed over to the computer.
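How simple ELIZA's answers really were becomes clear from a minimal sketch of its underlying technique: keyword matching, pronoun reflection, and canned response templates. The patterns and phrasings below are illustrative assumptions, not Weizenbaum's original DOCTOR script.

```python
import random
import re

# Pronoun reflections turn the user's words back on them ("my" -> "your").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "i"}

# Keyword patterns paired with canned templates; {0} receives the reflected
# remainder of the user's sentence. Patterns are illustrative assumptions.
PATTERNS = [
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.*)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"\bmy (.*)", re.I), ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the matched fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return the first matching template, or a generic fallback."""
    for pattern, templates in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel nobody listens to me"))
# -> e.g. "Why do you feel nobody listens to you?"
```

Nothing in this loop understands anything; a handful of patterns and templates suffices to create the impression of an attentive listener - precisely the effect that troubled Weizenbaum.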
The Weizenbaum Institute for the Networked Society in Berlin - founded in 2017 and supported by a consortium of seven universities and research institutions - conducts interdisciplinary research on the digitalization of politics, media, the economy, and civil society. Its researchers feel committed to the work of the institute's namesake and focus on the question of self-determination. One example is the public sphere, the central site of collective self-understanding and self-determination in a democracy. It is here that controversial issues are meant to be clarified and political decisions prepared in a diverse, respectful, and rational discourse. To this end, journalism selects topics, reports on them, moderates public discourse, and takes positions within it.
Responsible Use of AI in Journalism
When dealing with large language models such as ChatGPT, the question arises as to how far AI applications can and should go in shaping news and opinion. Algorithms are already used in newsrooms in many ways: they help identify emerging topics and detect fake news; they write weather or stock market reports on their own and generate subtitles for video reports; they personalize news pages and filter reader comments.
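To make "reports written on their own" concrete: much of this routine automation is not generative AI at all, but template-based text generation from structured data. The following sketch is a hypothetical, minimal example of how a stock market note might be produced; all field names, thresholds, and phrasings are assumptions for illustration, not a real newsroom system.

```python
# Minimal sketch of template-based report generation, the classic technique
# behind automated weather and stock market items. All field names and
# thresholds are illustrative assumptions.

def describe_move(change_pct: float) -> str:
    """Map a numeric change onto a pre-authored editorial phrase."""
    if change_pct >= 2.0:
        return "rose sharply"
    if change_pct > 0.0:
        return "edged up"
    if change_pct == 0.0:
        return "was unchanged"
    if change_pct > -2.0:
        return "slipped"
    return "fell sharply"

def stock_report(data: dict) -> str:
    """Fill a fixed sentence template from structured market data."""
    return (
        f"The {data['index']} {describe_move(data['change_pct'])} on "
        f"{data['day']}, closing at {data['close']:,} points "
        f"({data['change_pct']:+.1f}%)."
    )

sample = {"index": "DAX", "day": "Tuesday", "close": 15830, "change_pct": -0.4}
print(stock_report(sample))
# -> "The DAX slipped on Tuesday, closing at 15,830 points (-0.4%)."
```

With this technique, the newsroom retains full control over the wording, because every phrase is authored in advance; it is exactly this control that becomes harder to guarantee with large language models.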
These are all useful applications that can not only reduce the workload of newsroom staff but also improve the quality of the media. But how much control do newsrooms actually have over the results? Are professional standards being met? Are conflicts being stirred up, or is a distorted view of the world being created? And how much do audiences learn about how the AI works? These are important questions that call for special sensitivity in the use and active design of AI. Key factors for the responsible use of AI in journalism include the transparent labeling of AI applications, the testing of safety and quality standards, the promotion of further development and training, a critical approach to AI, and the reduction of fears through better education.
Here, too, the question posed by Joseph Weizenbaum arises: which tasks should not be handed over to computers? So far, there are no chatbots debating one another in public - but that could soon change. ChatGPT also fires the imagination. A democracy simulation that relieved us as citizens of the tasks of informing ourselves, reflecting, discussing, mobilizing, and participating would be the end of self-determination and maturity in a democracy. Moderation in the use of large language models is therefore the imperative that should be observed here, as in other areas of application.
The whitepaper of the IT Security, Privacy, Law and Ethics Working Group provides an overview of the potential and challenges of AI use in journalism.