People are reporting cases where ChatGPT has cracked medical mysteries that even specialized teams couldn't solve. The AI analyzes symptoms, lab results, and imaging data, then suggests hypotheses for medical professionals to consider.
One widely shared Reddit post describes how ChatGPT identified a diagnosis that doctors had missed for over a decade. Despite years of MRIs, CT scans, bloodwork, and neurological exams, the cause of the user's symptoms remained a mystery until ChatGPT pointed to a genetic MTHFR mutation.
According to the user, this mutation affects seven to twelve percent of the population. The treating physician was surprised, but ultimately agreed that the diagnosis made sense.
In this case, the user had fed ChatGPT their lab values, symptom descriptions, and medical history. The AI spotted that, even with normal B12 levels, the mutation could keep the body from using the vitamin properly - a problem that can be addressed with targeted supplements. Within a few months, the symptoms had mostly disappeared.
AI helps to uncover rare and overlooked conditions
The post sparked a wave of similar stories. One user shared that, after 15 years of unexplained nausea, ChatGPT suggested seeing an ENT specialist. A subsequent brain scan finally revealed a balance disorder linked to labyrinthitis and a pinched nerve - a treatable problem.
Other rare diagnoses like eosinophilic fasciitis, hereditary angioedema, and the MTHFR mutation came up repeatedly in the comments. Several users described being dismissed for years or misdiagnosed with psychosomatic disorders. In multiple cases, doctors later confirmed the diagnoses suggested by ChatGPT.
Many users pointed to deeper issues in the healthcare system: time pressure, excessive specialization, and overworked clinics often mean symptoms are viewed in isolation. "Every appointment is like an island," one user wrote. ChatGPT, by contrast, can pull together information from a wide range of sources and analyze it without human bias.
ChatGPT as a diagnostic tool - but with limits
A medical student in the discussion confirmed that doctors are trained to look for the most likely explanation - to "look for horses, not zebras." As a result, rare diseases often go undetected. Keeping up with the latest research is nearly impossible for any one doctor, which is where AI has an edge.
Despite these successes, most users warned against relying on AI alone. ChatGPT can still make errors or draw incorrect conclusions. Many users stressed that they verified the AI's suggestions with their doctors.
AI can't replace medical care, but it can help patients advocate for themselves and bring new ideas to their doctors. As one user put it, ChatGPT gives patients a stronger voice and can help reveal connections that might otherwise be missed.
Important: Be careful about uploading medical documents to ChatGPT. Personally identifiable health information should not be entered into AI systems lightly, since consumer chatbots may process it outside the protections of regulations like HIPAA or GDPR. Anyone working with sensitive data should anonymize it first.
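As a rough illustration (not part of the Reddit discussion), the short Python sketch below shows what a basic anonymization pass might look like: it strips a few obvious identifiers such as dates, phone numbers, email addresses, and record numbers before text is pasted into a chatbot. The patterns and the redact function are illustrative assumptions, not a complete de-identification tool, and they do not catch names or free-text details.

```python
import re

# Illustrative only: a handful of regex patterns for common identifiers.
# Real de-identification (e.g. HIPAA Safe Harbor) covers many more fields
# and should not rely on simple patterns alone; names in free text, for
# example, are not caught here.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"(?:\+\d{1,3}[ .-]?)?(?:\(\d{3}\)\s?|\d{3}[ .-])\d{3}[ .-]?\d{4}"),
    "date": re.compile(r"\b\d{1,2}[/.-]\d{1,2}[/.-]\d{2,4}\b"),
    "record_id": re.compile(r"\b(?:MRN|ID)[:# ]?\d{4,}\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

note = "Jane Doe, MRN 1234567, seen 03/14/2024. Contact: jane.doe@example.com, (555) 123-4567."
print(redact(note))
# Dates, the record number, the email address, and the phone number are replaced;
# the name "Jane Doe" is not, which is why simple scripts are only a first step.
```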
Microsoft's AI outperforms doctors in complex diagnoses
Recent research shows just how far medical AI has come. Microsoft has launched the MAI Diagnostic Orchestrator (MAI-DxO), an AI system that reportedly achieved four times the diagnostic accuracy of experienced general practitioners on complex cases, and did so at a much lower cost. The system uses a new benchmark that simulates the real step-by-step diagnostic process.
MAI-DxO uses a language model that takes on five roles, from generating diagnostic hypotheses to managing costs. Microsoft reports that the system achieved 79.9 percent accuracy at an average cost of $2,397 per case. In comparison, the doctors involved reached just 19.9 percent accuracy at a higher cost. These findings echo the kinds of stories patients have been sharing online.
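To make the "five roles" idea more concrete, here is a deliberately simplified sketch of how such an orchestrator could be wired up: a single language model is prompted in several roles in turn, and each answer is appended to a shared case state. The role prompts, the ask_model() stub, and the loop structure are assumptions for illustration only and do not reflect Microsoft's actual implementation.

```python
# Simplified sketch of a role-based diagnostic orchestrator.
# The roles and prompts below are illustrative assumptions, not MAI-DxO's real design.

ROLES = [
    "Generate a ranked list of diagnostic hypotheses for the case so far.",
    "Propose the single next test or question with the best information value.",
    "Estimate the cost of the proposed test and flag it if it exceeds the budget.",
    "Challenge the leading hypothesis: what common diagnosis could explain this instead?",
    "Decide whether to commit to a final diagnosis or request another round of testing.",
]

def ask_model(role_prompt: str, case_state: str) -> str:
    """Stand-in for a call to a language model (e.g. a chat completion API)."""
    return f"<model answer for: {role_prompt[:40]}...>"

def diagnose(case_summary: str, max_rounds: int = 3) -> str:
    state = case_summary
    for _ in range(max_rounds):
        for role_prompt in ROLES:
            # Each role sees the accumulated case state and appends its answer,
            # mimicking a panel that reasons about the same patient in turn.
            answer = ask_model(role_prompt, state)
            state += f"\n[{role_prompt.split()[0]}] {answer}"
        # A real system would parse the final role's output and stop once it commits.
    return state

print(diagnose("55-year-old with three weeks of fever and unexplained weight loss"))
```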
Studies at NYU suggest that patients often perceive chatbot responses as more empathetic than messages from doctors who are short on time. Tools like ChatGPT are available around the clock and can explain things clearly and reassuringly.
In May, OpenAI reported that its new "o3" model scored twice as high as GPT-4o on the HealthBench test and matched or even outperformed human responses on several measures. OpenAI points out, however, that these results only apply to specific tasks and don't reflect the overall quality of medical care.
So far, AI has played a supporting role for medical teams rather than replacing them. For example, the number of radiologists at the Mayo Clinic has grown from 260 to more than 400 since 2016 - an increase of about 55 percent - even as automation has advanced.
The World Health Organization is calling for clear guidelines on transparency, safety, and accountability as AI becomes a bigger part of healthcare. Some researchers argue that oversight shouldn't be left solely to big tech companies, and that regulation is needed to keep the focus on patient safety.