AI in practice

Big Tech's race to control generative AI in healthcare raises ethical concerns

Matthias Bastian
Illustration: a businessman in a sharp suit, carrying a suitcase, disguised as a doctor with a white coat and stethoscope, rendered in a hand-drawn style with a subtle glitch effect.

DALL-E 3 prompted by THE DECODER

Researchers argue that Big Tech should not control generative AI in healthcare.

In an article published in Nature, researchers express concern that big tech companies could dominate the development and use of generative AI in healthcare.

They argue that medical professionals, not commercial interests, should drive development and deployment in order to protect patients' privacy and safety.

Big Tech's entry into healthcare AI

Tech giants such as Google and Microsoft are making great strides in generative AI for healthcare.

Google recently unveiled MedLM, a family of specialized generative AI models for healthcare, available to customers in the US through its Vertex AI platform. The models are based on Med-PaLM 2, the second generation of Google's specialized medical language model, which can answer medical questions at an expert level.
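For readers curious what access looks like in practice, the snippet below is a minimal sketch of querying a MedLM model through the Vertex AI Python SDK. The model ID "medlm-medium", the project and location values, and the exact SDK surface are assumptions based on Google's public documentation rather than details from this article; check the current Vertex AI docs before relying on them.

```python
# Minimal sketch (assumptions: model ID "medlm-medium", classic Vertex AI text SDK).
import vertexai
from vertexai.language_models import TextGenerationModel

# Initialize the SDK for a hypothetical Google Cloud project and region.
vertexai.init(project="your-gcp-project", location="us-central1")

# Load one of the MedLM text models and send a single prompt.
model = TextGenerationModel.from_pretrained("medlm-medium")
response = model.predict(
    "Summarize the key differential diagnoses for acute chest pain.",
    temperature=0.2,
    max_output_tokens=512,
)
print(response.text)
```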

Microsoft recently introduced Medprompt, a prompting strategy that enables GPT-4 to achieve top scores on medical question-answering benchmarks, outperforming specialized models such as Med-PaLM 2. Earlier this year, Microsoft highlighted GPT-4's potential for medical tasks.
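Medprompt combines dynamically selected few-shot examples, self-generated chain-of-thought reasoning, and a choice-shuffling ensemble. The sketch below illustrates only the last of these steps; the `ask_model` callable is a hypothetical stand-in for a GPT-4 call, not part of Microsoft's published code.

```python
# Minimal sketch of Medprompt's choice-shuffling ensemble step.
# The kNN few-shot selection and self-generated chain of thought are omitted.
import random
from collections import Counter
from typing import Callable

def choice_shuffle_ensemble(
    question: str,
    options: list[str],
    ask_model: Callable[[str, list[str]], str],  # hypothetical LLM call returning the chosen option text
    runs: int = 5,
) -> str:
    """Query the model several times with shuffled answer options and majority-vote the result."""
    votes: Counter[str] = Counter()
    for _ in range(runs):
        shuffled = random.sample(options, k=len(options))  # reorder the answer choices
        answer = ask_model(question, shuffled)             # model picks one option
        votes[answer] += 1                                  # vote on the option text, not its position
    return votes.most_common(1)[0][0]
```

Shuffling the options before each call reduces the model's sensitivity to answer ordering, and majority voting over the option texts aggregates the runs into a single, more stable answer.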

Despite these advances, the researchers argue that the rush to adopt proprietary large language models (LLMs) - such as those behind ChatGPT - risks ceding control of medicine to opaque commercial interests. They point to several potential pitfalls.

Healthcare could quickly become dependent on LLMs, which are difficult to evaluate and could be changed without notice or even discontinued if the service is deemed no longer viable. This could undermine patient care, privacy, and safety, the researchers write.

In addition, LLMs are prone to hallucinations, producing convincingly worded but false output. And when circumstances change, such as the emergence of a new virus, it is unclear how a model's knowledge base can be updated without costly retraining.

Training models on medical records also poses privacy risks: a model could memorize sensitive information and reproduce it in response to queries. This risk is particularly acute for data about people with rare diseases or conditions, the researchers note.

Finally, LLMs trained on large amounts of internet data could reinforce biases related to gender, race, disability, and socioeconomic status. And even if outsiders had access to the underlying models, it is not clear how best to assess their safety and accuracy.

Open-source LLMs could lead to more collaboration and transparency

The researchers suggest a more transparent and comprehensive approach. They recommend that healthcare institutions, academic researchers, physicians, patients, and even technology companies around the world work together to develop open-source LLMs for healthcare.

This consortium could develop an open-source foundation model based on publicly available data. Members could then share knowledge and best practices to fine-tune the model on patient-level data held privately at individual institutions.

Such an open, consortium-led approach would have several advantages over proprietary LLMs for medicine: it would help ensure the models' reliability and robustness, enable shared and transparent evaluation, and make it easier to comply with privacy and other regulatory requirements.
