
Prof. Dr. Felix Nensa is a radiologist and professor at the Institute for Artificial Intelligence in Medicine at the University Hospital Essen. He focuses on the potential of AI in medicine.

Large Language Models (LLMs) could help process medical information by translating doctors' letters into understandable everyday language or generating structured and machine-readable data from recorded doctor-patient conversations. However, such models would face regulatory privacy hurdles.

Despite privacy concerns, LLMs have the potential to alleviate staffing shortages in the healthcare system and make medical care more person-centered by freeing medical staff from administrative tasks, says Prof. Felix Nensa.

Prof. Dr. Felix Nensa, radiologist and professor at the Institute for Artificial Intelligence in Medicine at the University Hospital Essen, Germany. | Image: UK Essen

What applications do you see for ChatGPT-like systems? For specialists, for patients, for both?

There is currently a discussion about using ChatGPT to generate medical letters. I am not convinced that makes sense. We don't need more relatively unstructured textual information.

Instead, we need to prepare medical information so that it can be understood and used by the recipient as quickly as possible. This is where Large Language Models (LLMs) can be very helpful. The technology is excellent at translating spoken medical language and interpersonal interaction into structured data.

In other words, I see useful applications in translating doctors' letters into language that laypeople can understand and, conversely, in generating or extracting structured, machine-readable data from spoken information such as recorded doctor-patient conversations, which is what medical professionals and specialists need and want. It is important that the "translations" for patients are medically verified.

Of course, there is nothing wrong with using an LLM to translate structured information back into a classic physician's letter, if the recipient of the information wants this, but this should not be the main focus. In everyday clinical practice, a compressed tabular representation is likely to be preferred anyway, and this could also be generated automatically.
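To make the idea of turning a recorded conversation into structured, machine-readable data more concrete, here is a minimal sketch. It assumes a locally hosted model behind an OpenAI-compatible endpoint (so that patient data never leaves the hospital network); the endpoint URL, model name, and JSON schema are illustrative assumptions, not details of the setup described in the interview.

```python
# Illustrative sketch only: extract structured findings from a fictitious
# doctor-patient conversation transcript with an LLM. The endpoint, model name,
# and schema are placeholders, not the system used in Essen.
import json
from openai import OpenAI

# Point the client at a locally hosted, OpenAI-compatible server so that
# patient data stays inside the clinic's infrastructure (assumption).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

transcript = (
    "Doctor: Any known allergies?\n"
    "Patient: Yes, penicillin. And I've had mild asthma since childhood.\n"
    "Doctor: I'll prescribe an alternative antibiotic and schedule a lung function test."
)

schema_hint = {
    "allergies": ["list of substances"],
    "diagnoses": ["list of conditions"],
    "planned_procedures": ["list of procedures"],
}

response = client.chat.completions.create(
    model="local-clinical-llm",  # placeholder model name
    response_format={"type": "json_object"},  # ask for machine-readable JSON output
    messages=[
        {"role": "system",
         "content": "Extract structured clinical data from the transcript. "
                    f"Return JSON with exactly these keys: {json.dumps(schema_hint)}"},
        {"role": "user", "content": transcript},
    ],
)

structured = json.loads(response.choices[0].message.content)
print(structured)  # e.g. {"allergies": ["penicillin"], "diagnoses": ["asthma"], ...}
```

The point of the sketch is the direction of translation Nensa describes: from free-form speech toward structured data that downstream systems can process, rather than toward yet another block of unstructured text.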

What is missing in ChatGPT so that it could be used in these applications?

For many applications, we are still in the development and testing phase. So there is a lack of training and experience, but also a lack of transparency and explainability of the models. We are already having doctors' letters rewritten as described, and we are working on enabling doctors and nurses to use LLMs to search for allergies and diseases in our electronic patient record in a matter of seconds.

But this is about using proprietary data. We have the necessary highly secure IT infrastructure. In Essen, we now have access to more than 1.5 billion data packages. But the more data that can be used to train AI, the better. There is still a lack of common standards and data interoperability.
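As an illustration of the kind of record search mentioned above, the sketch below feeds a handful of fictitious patient-record excerpts to a locally hosted LLM and asks it to list documented allergies. The record format, endpoint, and model name are assumptions made for the example, not details of the Essen system.

```python
# Illustrative sketch: let an LLM answer "which allergies are documented?"
# over a few flattened patient-record excerpts. Endpoint and model name are
# placeholders; the record entries are fictitious.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

# In practice these excerpts would be retrieved from the hospital's
# electronic patient record; here they are hard-coded for the example.
record_entries = [
    "2019-03-12 Discharge letter: known allergy to penicillin, rash on exposure.",
    "2021-07-02 Anesthesia note: no known latex allergy.",
    "2023-11-20 Radiology report: contrast agent tolerated without reaction.",
]

response = client.chat.completions.create(
    model="local-clinical-llm",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer questions strictly from the record excerpts provided. "
                    "If the answer is not in the excerpts, say so."},
        {"role": "user",
         "content": "Record excerpts:\n" + "\n".join(record_entries)
                    + "\n\nWhich allergies are documented for this patient?"},
    ],
)

print(response.choices[0].message.content)
```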

What barriers do you see to LLMs entering medicine?

As is often the case when data is used for medical purposes, the use of such technologies currently faces regulatory issues. In other words, privacy is an obstacle. Sensitive patient data cannot simply be sent to large cloud servers from outside companies like OpenAI.

Don't get me wrong: privacy is hugely important, and I don't think the use of ChatGPT in this context is justifiable. But privacy should not stand in the way of medical progress, and certainly not in the way of a promising therapy.

It's like Paracelsus said: "The dose makes the poison."

I have the impression that Germany and the European Union lack the courage to make concessions when it comes to data protection. At the same time, many patients don't mind sharing their data if it helps them. They are happy to hand over their data for further research if its ethical use is guaranteed and progress can ultimately be made for all of us.

International technology companies are currently setting technological standards. In the future, European companies will have to rely on them if there are no European alternatives.

This is worrying, because Germany and the EU are built on different values and have different standards for data protection. So we need to be more flexible and pragmatic about data protection, especially if we want to make our European values on ethics and data protection future-proof.

How can we ensure that these technologies are actually benefiting patients and not being used by private clinics to compensate for staff shortages or even exacerbate them to increase profits?

Of course, it depends on the right framework. University hospitals, for example, are in a position to take on trust center functions. It is also true that you can never prevent technologies from being used for criminal purposes.

But when I weigh the potential risks of AI against the expected benefits, I see the pendulum swinging strongly in the direction of the technology. That's because the benefits of AI are many, including alleviating the current shortage of medical personnel. Because of demographic change alone, we will not be able to solve this shortage.

But we can mitigate it with the help of modern technologies like ChatGPT. So if a hospital, whether private or not, can use AI to ease the ubiquitous staffing shortage, that is a good thing in itself, and it will benefit patients in particular. If they only have to wait one week for an MRI instead of five, for example, that is a real improvement.

Contrary to the widespread belief that AI solutions will contribute to the dehumanization of medicine, I see them as an opportunity for tomorrow's medicine to become more human again, because AI can free doctors and nurses from administrative tasks, giving them more time for interpersonal contact.

But as a society, we really need to make sure that the time gained by staff is invested in time with patients, not just in maximizing profits.
