AI systems like ChatGPT are taking on roles once reserved for doctors—not because they are more medically capable, but because they are not constrained by time, bureaucracy, or exhaustion.
When journalist Kate Pickert was diagnosed with breast cancer ten years ago, she had numerous questions about test results, side effects, and new clinical studies. Her oncologist often responded promptly via email, but sometimes Pickert received only an out-of-office reply: on call, traveling, unavailable for days. At a time when every hour felt critical, such delays seemed interminable.
Looking back, Pickert, now a journalism professor, wonders what might have been different if tools like ChatGPT had existed at the time—a system available 24/7, able to explain medical terminology, summarize research findings, and do so with a tone of compassion. That question forms the basis of an essay she recently published on Bloomberg.
Her central argument is that artificial intelligence in healthcare has so far been treated primarily as an efficiency tool, one that supports diagnosis, automates documentation, and analyzes data. Its most significant benefit, however, may lie elsewhere: in human connection.
One example is Rachel Stoll, a patient living with Cushing's syndrome, a rare hormonal disorder. After a frustrating medical encounter, she turned to ChatGPT for the first time and was surprised by what she found. The system responded not only with accurate medical information but also with empathetic phrases like “That must be frustrating” and “I’m sorry to hear that.” No time limit, no digressions, no impatience.
Offering emotionally intelligent digital conversations
The idea that language models can sometimes appear more empathetic than human doctors has been supported by recent research. A study from New York University found that patients rated chatbot responses as more empathetic than those written by physicians. The reasons appear to be structural: time constraints, administrative burdens, and high workloads often prevent empathy from surfacing in daily clinical routines.
These differences aren’t just theoretical. Dr. Jonathan Chen of Stanford University tested ChatGPT with an ethical dilemma from his practice: a patient with dementia could no longer swallow—should a feeding tube be inserted or not? The system’s response was so nuanced and compassionate that, as Chen put it, “Holy crap, this chatbot is providing better counseling than I did in real life.” For him, it became a learning opportunity—a low-risk environment to practice difficult conversations.
Such applications may become increasingly valuable in medical education. Instead of hiring actors to simulate patient interactions, students could use AI-powered simulations to practice engaging with various personality types—from anxious parents to withdrawn cancer patients. Bernard Chang, a medical educator at Harvard, sees this as a way to strengthen the emotional skills of future physicians.
Supporting more human care through machines
The paradox at the center of Pickert’s essay is that more automation could help make healthcare more human. By offloading routine tasks—such as taking notes during consultations or drafting medical reports—AI could give clinicians more time for what patients need most: attention and care.
AI is already used in diagnostics, for instance in interpreting X-rays. Its potential as a communication tool, however, remains underappreciated, even though this is precisely where improvement is most needed. When a machine asks the gentle question a human forgot to ask, the problem lies not with the AI but with the system that made it necessary.