AI in practice

ChatGPT gives better advice, but we'd rather hear it from someone with a pulse, study shows

Matthias Bastian
Editorial illustration: a chatbot depicted as a therapist in session with a human on a couch.

DALL-E 3 prompted by THE DECODER

A recent study shows that ChatGPT's advice is perceived as more balanced, comprehensive, empathetic, and helpful than responses from professional advice columnists.

The study, conducted by researchers at the University of Melbourne and the University of Western Australia and published in Frontiers in Psychology, compared ChatGPT and human responses to 50 social dilemma questions randomly selected from ten popular advice columns.

People prefer AI when they don't know

For the study, the researchers used the paid version of ChatGPT with GPT-4, currently the most capable LLM on the market.

They presented 404 participants with questions, each paired with the corresponding answer from a columnist and from ChatGPT. The participants were asked to rate which answer was more balanced, comprehensive, empathetic, helpful, and better overall.

The researchers found that ChatGPT "significantly outperformed" the human advisors on each of the five randomly selected questions shown to participants and in every category queried, with preference rates ranging from about 70 to 85 percent in favor of the AI.

The study also showed that ChatGPT's answers were longer than those of the advice columnists. In a second study, the researchers shortened ChatGPT's responses to roughly the same length as the columnists' responses. This second study confirmed the first, albeit at a slightly lower level, showing that ChatGPT's advantage was not solely due to more detailed responses.

Both surveys show that many people found ChatGPT advice to be more balanced, complete, empathetic, helpful, and overall better than advice from a professional counselor. Of the 404 participants in Survey 1 and the 401 participants in Survey 2, a majority voted in favor of ChatGPT on all counts. | Image: Howe et al.

People prefer people when asked

Despite the perceived quality of ChatGPT's advice, the majority (77%) of study participants said they would still prefer a human response to their own social conflict questions. This preference for human responses is consistent with previous research.

However, participants could not reliably distinguish which responses were written by ChatGPT and which were written by humans. In other words, the preference for human responses is not directly related to the quality of the responses, but rather appears to be a social or cultural phenomenon.

The researchers suggest that future research should explore this phenomenon in more detail, for example by informing participants in advance which answers were written by the AI and which were written by humans. This could increase the willingness to seek advice from the AI.

Previously, a study by psychologists had shown that ChatGPT could describe the possible emotional states of people in hypothetical scenarios on the Levels of Emotional Awareness Scale (LEAS) in much more detail than humans could.

Another study of AI empathy, published in April 2023, found that people can perceive AI responses to medical questions as more empathetic and of higher quality than those of physicians. However, that study did not examine the accuracy of the responses.