
A mother struggled for three years to get a diagnosis for her son's chronic pain. ChatGPT then reportedly provided the correct diagnosis.


According to the mother, she visited 17 specialists over the course of three years, each of whom looked for a possible cause of the boy's ailments within their own area of expertise. But none of them could explain or alleviate her son's chronic pain, the mother, who wishes to remain anonymous, told Today.com.

She then entered all of her son's symptoms and MRI data into ChatGPT. OpenAI's AI system spit out a diagnosis that medical professionals had not previously made: tethered cord syndrome.

This is a childhood condition in which the spinal cord becomes tethered to its sheaths or surrounding tissue. The resulting traction on nerve fibers can cause headaches and other symptoms. A neurosurgeon confirmed the diagnosis and performed surgery on the boy, who is still recovering.


In a similar story, GPT-4 diagnosed a rare canine disease based on blood test results. The first veterinary clinic had failed to identify the illness, but a second clinic later confirmed GPT-4's diagnosis.

AI chatbots help with diagnosis

Such anecdotes illustrate the potential of large language models to answer expert-level medical questions, a capability that has already been demonstrated in studies. GPT-4 is at the forefront of this field, and with Med-PaLM, Google aims to commercialize specialized language models for medicine.

Compared to traditional internet searches, LLMs offer potentially broader expertise that can be tapped in a highly focused way, making research faster. However, there is a risk of hallucinations, i.e. incorrect answers, which can have particularly serious consequences in a health context.

But as the two examples above show, humans make mistakes, too. The discussion about the use of medical chatbots could end up resembling the one about self-driving cars: is it enough for these systems to perform reliably at or slightly above human level, or do they need to be flawless? And who will be held responsible when an error occurs?

Summary
  • A mother entered her son's symptoms and MRI data into ChatGPT, and the AI system suggested tethered cord syndrome, a diagnosis that a neurosurgeon confirmed after 17 specialists had been unable to find the cause of the boy's chronic pain.
  • Studies show that large language models (LLMs) such as GPT-4 can answer expert-level medical questions and deliver results faster than traditional internet searches, but they risk producing false information, or "hallucinations", which is especially dangerous in a health context.
  • The debate around medical chatbots focuses on the reliability of AI systems - should they be flawless, or is it enough for them to be as reliable as, or better than, human experts? There are also questions about where responsibility lies when errors happen.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.