The US Food and Drug Administration is relying on Elsa, a generative AI system, to help evaluate new drugs - even though insiders say it regularly fabricates studies.
"Anything that you don’t have time to double-check is unreliable. It hallucinates confidently," one current FDA employee told CNN, describing the AI system known as Elsa (Efficient Language System for Analysis), which is supposed to speed up drug approvals. Several staff members reported that Elsa frequently invents studies or misrepresents research data - a well-known issue with large language models. The FDA's head of AI, Jeremy Walsh, acknowledged the problem: "Elsa is no different from lots of [large language models] and generative AI. They could potentially hallucinate."
Despite these concerns, Elsa is already being used to review clinical protocols and to assess risks during inspections. The system operates in a regulatory gray area, as there are currently no binding rules governing the use of AI in US healthcare.