Prof. Dr. Peter Dabrock researches and teaches systematic theology (ethics) at the Friedrich-Alexander-Universität Erlangen-Nürnberg. He is a member of the Learning Systems Platform. In his essay, he argues for a pragmatic approach to large language models in which we put the familiar to the test.
Large language models like ChatGPT are celebrated as technical breakthroughs in AI; their impact on our society is discussed sometimes with concern, sometimes in outright demonizing terms. Life is rarely black and white, but mostly shades of gray. Using clear criteria and a participatory approach, we need to map out the corridor for responsible use of this new technology.
The use of language models raises several ethical questions:
- Do the systems cause unacceptable harm to (all or certain groups of) people?
- Are the harms permanent and irreversible, or mild and temporary? Intangible or material?
- Are the language models problematic largely independent of their particular use?
- Or are dangerous consequences to be considered only in certain contexts of use, such as when a medical diagnosis is made automatically?
The ethical evaluation of the new language models, especially ChatGPT, depends on how one assesses their technical development as well as how deeply their various applications intervene in people's lives.
In addition, there is always the question of the technology's potential to address social problems, and of its impact on human self-understanding: can and should technology solve social problems, or does it exacerbate them, and if so, to what extent?
Non-discriminatory language models?
These fundamental ethical questions must be considered in the responsible design of language models. For ChatGPT and related systems, as for AI systems in general, the technical robustness of the system must be ensured, and above all so-called biases must be examined critically: biased attitudes embedded in the underlying data can be adopted, and even reinforced, during the programming, training, or use of a language model. Such biases must be minimized as far as possible.
But make no mistake: biases cannot be eliminated entirely, because they are also expressions of attitudes, and attitudes should not be completely erased.
They must, however, be critically examined: are they compatible, and in what way, with fundamental ethical and legal norms such as human dignity and human rights, and also, at least as far as large parts of many cultures desire it, with diversity? And do they avoid legitimizing or promoting stigmatization and discrimination?
One of the greatest challenges ahead is how to make this possible, both technically and organizationally. Language models will also hold up a mirror to society and – as is already the case with social media – can distort, but also expose and reinforce social fractures and divisions.
If we are to speak of disruption at all, part of that potential lies in the growing use of language models that can be fed with data far more intensively than today's models, allowing them to combine sound knowledge.
Even though they are self-learning systems based solely on neural networks, the effect may be so convincing that the generated texts simulate genuine human activity. They are therefore likely to pass the usual forms of the Turing Test. Whole libraries will be written about what this means for humans, machines, and their interaction.
The end of creative writing?
One effect that should be carefully monitored is the degradation of the basic cultural technique of individual writing. Why should this be of anthropological and ethical concern?
Recently, it has been argued that the formation of the modern individual subject and the emergence of Romantic epistolary literature stood in a constitutive, reciprocal relationship. This does not mean that the inevitable disappearance of the survey essay or term paper, designed merely to document basic undergraduate knowledge and now easily produced with ChatGPT, must also herald the end of the modern subject.
What is clear, however, is that independent creative writing must be practiced and internalized differently – and this is of considerable ethical relevance if the formation of a self-confident personality is crucial to our complex society.
Moreover, as a society, we must learn to deal with the expected flood of texts generated by language models. This is not just a matter of personal time management.
Rather, it threatens to create a new form of social inequality: the better off can draw inspiration from texts still written by humans, while the less educated and less affluent have to make do with the literary crumbs of ChatGPT.
Technologically disruptive or socially divisive?
A technological disruption by ChatGPT does not automatically mean an impending social divide. But that divide can only be avoided if we quickly rethink familiar practices, especially in education, and adapt them to the new possibilities.
We are responsible not only for what we do, but also for what we fail to do: large language models should be neither demonized nor banned.
Rather, we need to follow their evolution with an open mind and, as individuals and as a society, boldly shape, promote, and challenge them, bringing everyone along where possible to avoid unjustified inequalities. That is how ChatGPT can be handled responsibly.