
Anthropic CEO warns democracies must protect themselves from their own AI

Image: Sora, prompted by THE DECODER

Key Points

  • Anthropic CEO Dario Amodei warns in a new essay that democracies should use AI for national defense but avoid technologies that would make them resemble autocratic regimes, drawing absolute red lines against domestic mass surveillance and propaganda.
  • While Amodei advocates using AI to "disrupt and degrade autocracies from the inside," he sees autonomous weapons and strategic AI decision-making as more complicated, with his main concern being too few "fingers on the button."
  • Critics accuse Anthropic of stoking fears to limit competition, while others point to the company's $200 million Pentagon contract and partnerships with Palantir, which helps ICE track migrants in the US.

Dario Amodei outlines the dangers of powerful AI systems in a new essay. His central demand: Democracies should only use AI in ways that don't make them more like their autocratic adversaries.

Anthropic CEO Dario Amodei has published an extensive essay analyzing the risks of advanced AI systems. Titled "The Adolescence of Technology," it describes what Amodei calls humanity's "rite of passage." The essay is conceived as a companion to Amodei's earlier piece "Machines of Loving Grace" from October 2024: while that text laid out the positive possibilities of powerful AI, the new one focuses on the risks.

The central thesis can be summarized in one sentence: Democracies should use AI for national defense - in all ways "except those which would make us more like our autocratic adversaries."

Tools democracies should not use

Amodei identifies four technologies that autocracies could use to oppress their citizens: fully autonomous weapon swarms, AI-powered mass surveillance, personalized propaganda sustained over years, and strategic AI advisors - a kind of "virtual Bismarck."


For two of these applications, he draws an absolute line: AI-powered domestic mass surveillance and mass propaganda against one's own population are entirely illegitimate. He acknowledges that mass surveillance is already illegal in the US under the Fourth Amendment, but the rapid progress of AI could create situations that existing legal frameworks are not designed to handle. Amodei therefore advocates new legislation, or even a constitutional amendment, to protect civil liberties.

Directed outward, against autocratic adversaries, he considers the same tools legitimate. He explicitly advocates that democracies should use their intelligence services to "disrupt and degrade autocracies from the inside." Democratic governments could use their superior AI to "win the information war" and provide channels of information that autocracies lack the technical ability to block.

With fully autonomous weapons and strategic AI decision-making, he sees the situation as more complicated - these have legitimate uses in defending democracy. His main concern: too few "fingers on the button," such that a handful of people could operate a drone army without needing any other humans to cooperate.

Are we the baddies?

Critics like AI researcher Yann LeCun accuse the company of deliberately stoking fears with exaggerated risk scenarios to push through regulations that primarily disadvantage open AI models and thus limit competition. David Sacks, Donald Trump's AI advisor, also accused Anthropic of fearmongering to influence regulators.


Amodei rejects this, emphasizes cooperation with the US government, and, in response to the criticism, recently felt compelled to explicitly back President Donald Trump's AI policy. He presents this cooperation as nonpartisan and describes Anthropic as a "policy actor" that represents substantive positions to all political camps.

At the same time, Anthropic holds a contract with the US Department of Defense worth up to $200 million to develop so-called frontier AI for national security. Its language model Claude is also deployed in classified networks through partners such as Palantir and Lawrence Livermore National Laboratory. Palantir's software is used by US Immigration and Customs Enforcement (ICE), among other agencies, to track down migrants in the United States. None of this fundamentally contradicts Amodei's red lines. But in his attempt to save democracy from autocratic foreign adversaries and dangerous AI, his products could help strengthen autocracy at home.


Source: Dario Amodei