Former OpenAI policy chief launches institute for independent AI safety audits
Key Points
- Miles Brundage, former OpenAI policy research lead, has launched AVERI, a nonprofit advocating for independent safety audits of frontier AI models instead of industry self-assessment.
- The institute raised $7.5 million, including from AI company employees who Brundage says "know where the bodies are buried" and want more accountability.
- Without government mandates, market pressure from enterprise customers and insurers requiring audits could push AI labs toward external oversight.
Miles Brundage, who led policy research at OpenAI for seven years, is calling for external audits of leading AI models through his new institute AVERI. In his view, the industry should no longer be allowed to grade its own homework.
Miles Brundage has founded the AI Verification and Evaluation Research Institute (AVERI), a nonprofit organization advocating for independent safety audits of frontier AI models. Brundage, who left OpenAI in October 2024, had advised the company on how to prepare for the advent of artificial general intelligence.
"One of the things I learned while working at OpenAI is that companies are figuring out the norms of this kind of thing on their own," Brundage told Fortune. "There's no one forcing them to work with third-party experts to make sure that things are safe and secure. They kind of write their own rules."
The leading AI labs do conduct safety testing and publish technical reports, sometimes involving external red-teaming organizations. But consumers and governments currently have no choice but to trust what the labs say.
Insider donations hint at industry unease
AVERI has raised $7.5 million so far and is aiming for $13 million to cover 14 staff members. Funders include former Y Combinator president Geoff Ralston and the AI Underwriting Company. Notably, the institute has also received donations from employees at leading AI companies. "These are people who know where the bodies are buried," Brundage said, "and who would like to see more accountability."
Alongside the launch, Brundage and more than 30 AI safety researchers and governance experts published a research paper outlining a detailed framework for independent audits. The paper proposes "AI Assurance Levels": Level 1 roughly matches the current state, with limited third-party testing and restricted model access, while Level 4 would provide "treaty-grade" assurance robust enough to serve as a foundation for international agreements.
Insurers and investors could force the issue
Even without government mandates, several market mechanisms could push AI companies toward independent audits, Brundage believes. Large enterprises deploying AI models for critical business processes might require audits as a condition of purchase to protect themselves against hidden risks.
Insurance companies are likely to play a particularly important role, according to Brundage. Business continuity insurers could make independent evaluations a prerequisite before writing policies for companies that rely heavily on AI. Insurers working directly with AI companies like OpenAI, Anthropic, or Google could also demand audits. "Insurance moves fast," Brundage said.