Researchers argue that Big Tech should not control generative AI in healthcare.

In an article published in Nature, researchers express concern that big tech companies could dominate the development and use of generative AI in healthcare.

They argue that development and deployment should be driven by medical professionals - not commercial interests - to protect people's privacy and safety.

Big Tech's entry into healthcare AI

Tech giants such as Google and Microsoft are making great strides in generative AI for healthcare.

Google recently unveiled MedLM, a suite of specialized generative AI models for healthcare, available to customers in the US through its Vertex AI platform. The models are based on Med-PaLM 2, the second generation of Google's large medical language model, which can answer medical questions at an expert level.
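For illustration, querying MedLM looks much like querying any other Vertex AI text model. Here is a minimal sketch using the Vertex AI Python SDK; the project ID, the prompt, and the "medlm-medium" model ID are placeholders, not confirmed details from the article:

```python
# Sketch: querying a MedLM model through the Vertex AI Python SDK.
# Requires the google-cloud-aiplatform package and access to MedLM;
# the project ID and the "medlm-medium" model ID are placeholders.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")
model = TextGenerationModel.from_pretrained("medlm-medium")

response = model.predict(
    "Question: What are common causes of iron-deficiency anemia?",
    temperature=0.0,        # keep output deterministic
    max_output_tokens=256,
)
print(response.text)
```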

Microsoft recently introduced Medprompt, a prompting strategy that enables the general-purpose GPT-4 to achieve top scores on medical question benchmarks, outperforming specialized models such as Med-PaLM 2. Earlier this year, Microsoft had already highlighted GPT-4's potential for medical tasks.
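One ingredient of Medprompt is choice-shuffling ensembling: the answer options of a multiple-choice question are reordered across several queries, and the model's picks are majority-voted, which washes out position bias. A minimal sketch of that idea, where `ask_model` is a hypothetical stand-in for an actual GPT-4 call that returns an option letter:

```python
import random
from collections import Counter

def choice_shuffle_ensemble(question, options, ask_model, k=5, seed=0):
    """Query the model k times with shuffled answer options and
    majority-vote the picks, mapped back to the original option text."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(k):
        shuffled = options[:]
        rng.shuffle(shuffled)
        letters = "ABCDE"[:len(shuffled)]
        prompt = question + "\n" + "\n".join(
            f"{letter}. {option}" for letter, option in zip(letters, shuffled)
        )
        pick = ask_model(prompt)  # e.g. "B"
        votes[shuffled[letters.index(pick)]] += 1
    return votes.most_common(1)[0][0]

def toy_model(prompt):
    # Stand-in for a real LLM call: always "knows" the answer and
    # returns the letter it was listed under in this particular prompt.
    for line in prompt.splitlines():
        if line.endswith("Hyperkalemia"):
            return line[0]

print(choice_shuffle_ensemble(
    "Which electrolyte disturbance classically causes peaked T waves?",
    ["Hyperkalemia", "Hypokalemia", "Hypercalcemia", "Hyponatremia"],
    ask_model=toy_model,
))  # -> Hyperkalemia
```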

Despite these advances, the researchers argue that the rush to adopt proprietary large language models (LLMs) - such as those behind ChatGPT - risks ceding control of medicine to opaque commercial interests. They point to several potential pitfalls.

Healthcare could quickly become dependent on LLMs, which are difficult to evaluate and could be changed without notice or even discontinued if the service is deemed no longer viable. This could undermine patient care, privacy, and safety, the researchers write.

In addition, LLMs are prone to hallucinations - convincing but false output. And when circumstances change, such as when a new virus emerges, it is unclear how a model's knowledge can be updated without costly retraining.

Training the models on medical records would also pose privacy risks: a model could reconstruct sensitive information and reveal it on request. According to the researchers, this risk is particularly acute for data about people with rare diseases or conditions.

Finally, LLMs trained on large amounts of internet data could reinforce biases related to gender, race, disability, and socioeconomic status. And even if outsiders had access to the underlying models, it is not clear how best to assess their safety and accuracy.

Open-source LLMs could lead to more collaboration and transparency

The researchers suggest a more transparent and comprehensive approach. They recommend that healthcare institutions, academic researchers, physicians, patients, and even technology companies around the world work together to develop open-source LLMs for healthcare.

This consortium could develop an open-source foundation model from publicly available data. Consortium members could then share knowledge and best practices for refining the model with patient-level data held at individual institutions.
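In practice, such institution-level refinement could use parameter-efficient fine-tuning, so that each institution trains only small adapter weights on its private data and shares those - not the data, and not a full copy of the model. A minimal sketch with Hugging Face transformers and peft; the model name, adapter target modules, and the training example are placeholders, not details from the article:

```python
# Sketch: LoRA fine-tuning of an open foundation model on local data.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "open-medical-llm"  # placeholder: any open-source causal LM
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA trains small low-rank adapters instead of all model weights.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # adjust to the model's layer names
))

# Placeholder de-identified notes; real records never leave the institution.
data = Dataset.from_dict({"text": ["Example de-identified clinical note ..."]})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True),
                batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter-out", num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("adapter-out")  # the adapter is a few megabytes
```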

Such an open, consortium-led approach would have several advantages over the development of proprietary LLMs for medicine. It would help ensure their reliability and robustness, enable shared and transparent evaluation of models, and facilitate compliance with privacy and other requirements.

Summary
  • Researchers warn that the dominance of big tech companies such as Google and Microsoft in the development and application of generative AI in healthcare could jeopardize patient privacy and safety.
  • They argue that medical professionals and a consortium-led approach using open-source large language models (LLMs) should drive development to ensure transparency and privacy.
  • Such an approach would foster collaboration among healthcare institutions, academic researchers, physicians, patients, and technology companies worldwide, and allow for shared evaluation of the models.