
FDA is using an AI system that staff say frequently invents or misrepresents drug research

The US Food and Drug Administration is relying on Elsa, a generative AI system, to help evaluate new drugs, even though insiders say it regularly fabricates studies.

"Anything that you don’t have time to double-check is unreliable. It hallucinates confidently," one current FDA employee told CNN, describing the AI system known as Elsa (Efficient Language System for Analysis), which is supposed to speed up drug approvals. Several staff members reported that Elsa frequently invents studies or misrepresents research data - a well-known issue with large language models. The FDA's head of AI, Jeremy Walsh, acknowledged the problem: "Elsa is no different from lots of [large language models] and generative AI. They could potentially hallucinate."

Despite these risks, Elsa is already being used to review clinical protocols and assess risks during inspections. The system operates in a regulatory gray area, since there are currently no binding rules for AI in US healthcare.


Source: CNN