
Researchers warn against using ChatGPT and similar chatbots in psychotherapy today - and call for an international research project to make such use possible in the future.


Numerous AI startups and established companies are looking for use cases for large language models and chatbots like ChatGPT. Candidates range from marketing and sales to healthcare and psychotherapy.

But in a paper titled "Using large language models in psychology," a group of researchers now cautions against using these models in psychology, especially in psychotherapy. Their ability to generate psychologically useful information is fundamentally limited, says co-author Dora Demszky, a professor of data science in education at the Stanford Graduate School of Education: "They are not capable of showing empathy or human understanding."

ChatGPT doesn't have a theory of mind

What Demszky means is that large language models lack a "theory of mind" - an understanding of other people's mental states. David Yeager, a professor of psychology at the University of Texas at Austin and a co-author of the paper, points out that while the models can generate human-like text, they lack the depth of understanding of a professional psychologist or a good friend.


In their paper, the researchers argue for a partnership between academia and industry on the scale of the Human Genome Project. This partnership should develop key datasets, standardized benchmarks - for example, for psychotherapy applications - and a shared computing infrastructure for building psychologically competent LLMs.

Further research could enable transformative deployment

The motivation behind the call is twofold: First, the researchers fear "a world in which the makers of generative AI systems are held liable for causing psychological harm because nobody evaluated these systems’ impact on human thinking or behavior," Yeager said.

Second, they see potential in the models: "We argue that although LLMs have the potential to advance psychological measurement, experimentation and practice, they are not yet ready for many of the most transformative psychological applications — but further research and development may enable such use."

Summary
  • The researchers caution against the use of large language models such as GPT-4 in psychology and psychotherapy because of their limited ability to generate psychologically useful information and their lack of empathy and human understanding.
  • They argue for a partnership between academia and industry to develop key datasets, standardized benchmarks, and a common computational infrastructure for developing psychologically useful language models.
  • The team expresses concern about the potential psychological harm that could be caused by these models but also recognizes the potential for transformative psychological applications with further research and development.