For Eliezer Yudkowsky, the only meaningful measure of political success in AI governance would be an international treaty that legally requires shutting AI systems down.
"If we get an effective international treaty shutting A.I. down, and the book had something to do with it, I’ll call the book a success. Anything other than that is a sad little consolation prize on the way to death," he told the New York Times:
This position leaves no middle ground. For Yudkowsky, everything short of total prohibition, whether red-teaming, interpretability research, or model evaluations, is meaningless. He sees talk of "differentiated risk regulations" or "safe labs" as distractions from the only viable policy: a global, binding shutdown. "Among the crazed mad scientists driving headlong toward disaster, every last one of which should be shut down, OpenAI’s management is noticeably worse than the pack, and some of Anthropic’s employees are noticeably better than the pack," he told the Times. "None of this makes a difference, and all of them should be treated the same way by the law."
A longtime voice of alarm
Yudkowsky is the co-founder of the Machine Intelligence Research Institute (MIRI), a nonprofit in Berkeley that has been studying the risks of advanced AI since the early 2000s. He is also one of the central figures in the Rationalist community, whose discussions of cognitive rationality, decision theory, and existential risk were shaped heavily by his writings and by the site he founded, LessWrong.
He rose to prominence with early warnings about uncontrollable superintelligence, long before the latest wave of AI hype. His radical proposals, from pausing AI training runs to authorizing military strikes on rogue data centers, have made him one of the most controversial figures in today’s AI debates.
Why superintelligence means "everyone dies"
Yudkowsky’s uncompromising stance rests on a simple assumption: any artificial superintelligence built with current methods would end humanity. "If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of A.I., then everyone, everywhere on Earth, will die," he writes in his new book If Anyone Builds It, Everyone Dies, co-authored with MIRI president Nate Soares.
He compares the inevitability of this outcome to the laws of physics: "Imagine going up to a physicist and saying, ‘Have any of the recent discoveries in physics changed your mind about rocks falling off cliffs?’"
In his view, progress in interpretability, alignment strategies, or corporate ethics doesn’t change the math. Once systems cross a threshold of capability, no amount of goodwill or safety tooling matters.
No timelines, no consolation prizes
Yudkowsky also rejects the framework many policymakers use — attaching governance measures to timelines or probabilistic forecasts. "What is this obsession with timelines?" he asked. Delaying regulation because a breakthrough might be ten, twenty, or fifty years away, he argues, is reckless. If the risk exists at all, action must come immediately.
He is equally dismissive of any reliance on company culture or corporate ethics. "Good" teams might be marginally less dangerous, he admits, but structurally the outcome is identical. Once capabilities advance far enough, governance short of shutdown cannot prevent catastrophe.
That, for him, becomes a test of seriousness. Any actor who claims to care about existential risk but avoids endorsing shutdown is, in his eyes, being inconsistent.