
According to a recent study, ChatGPT surpasses physicians in both quality and empathy when responding to online medical queries. However, there are some caveats.


A recent study published in JAMA Internal Medicine finds that ChatGPT outperforms physicians in both quality and empathy when responding to online queries. The study compared ChatGPT's answers with physicians' responses to patient questions from Reddit's r/AskDocs forum.

The cross-sectional study involved 195 randomly selected questions and found that a panel of licensed healthcare professionals preferred the chatbot's responses over those of the physicians. ChatGPT received significantly higher ratings for both quality and empathy.

Image: John W. Ayers, PhD, MA; Adam Poliak, PhD; Mark Dredze, PhD; et al.

Notably, the study used GPT-3.5, an older model, which suggests that the newer GPT-4 could deliver even better results.


Chatbot-assisted doctors

The researchers write that AI assistants could help draft responses to patient questions, potentially benefiting both clinicians and patients. Further exploration of this technology in clinical settings is needed, including using chatbots to draft responses for doctors to edit. Randomized trials could assess the potential of AI assistants to improve responses, reduce clinician burnout, and improve patient outcomes.

Prompt and empathetic responses to patient queries could reduce unnecessary clinical visits and free up resources. Messaging is also a critical resource for promoting patient equity, benefiting individuals with mobility limitations, irregular work schedules, or fear of medical bills.

Despite the promising results, the study has limitations. The use of online forum questions may not reflect typical patient-physician interactions, and the ability of the chatbot to provide personalized details from electronic health records was not assessed.

Additionally, the study didn't evaluate the chatbot's responses for accuracy or fabricated information, which is a major concern with AI-generated medical advice. Sometimes it's harder to catch a mistake than to avoid making one, even when humans are in the loop for final review. But it's important to remember that humans make mistakes, too.

The authors conclude that more research is needed to determine the potential impact of AI assistants in clinical settings. They emphasize addressing ethical concerns, including the need for human review of AI-generated content for accuracy and potential false or fabricated information.


It's worth noting that ChatGPT is not specifically optimized for medical tasks. Google is developing Med-PaLM 2, a large language model fine-tuned for medical purposes. Google claims it can pass medical exams and plans to start trials with professionals.

Summary
  • A recent study shows that ChatGPT significantly outperforms physicians in quality and empathy when answering online questions about medical topics.
  • AI assistants can benefit clinicians and patients by drafting responses, potentially reducing burnout and improving outcomes, the researchers write.
  • Despite promising results, ethical concerns and human review of AI-generated content for accuracy and potential misinformation are critical in healthcare applications; further research is needed.