After leaks and massive criticism, OpenAI adds safeguard clauses to Pentagon contract
Key Points
- OpenAI is adding safeguard clauses to its contract with the US Department of Defense after the deal drew sharp criticism from both inside and outside the company.
- The key addition: the AI system must not be used to surveil US citizens, and intelligence agencies like the NSA are explicitly barred from using OpenAI's services, according to the Department of Defense.
- CEO Sam Altman acknowledged that the original announcement was rushed and poorly communicated. OpenAI had initially agreed that its AI could be used for "all lawful use" without specifying limitations.
After OpenAI stepped in to take over Anthropic's Pentagon deal, the ChatGPT maker faced backlash from both inside and outside the company.
OpenAI employees publicly questioned the deal, and some ChatGPT users canceled their accounts and switched to Anthropic's Claude, pushing it to number one in the Apple App Store. That was enough to get OpenAI's attention.
Now the company is adding new clauses to its contract with the US Department of Defense (DoD). CEO Sam Altman posted the details on X, sharing a message originally written for internal staff.
The biggest addition: the AI system cannot be intentionally used to surveil US citizens, not even indirectly as an analysis tool applied to commercially purchased personal data. The added contract language reads:
Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.
For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.
The DoD also confirms that intelligence agencies like the NSA aren't allowed to use OpenAI's services. That would require a separate contract.
Altman stresses that the technology isn't ready for many use cases and that OpenAI still doesn't fully understand what safeguards are needed for secure deployment. According to Altman, these issues will be worked out step by step with the DoD, including technical protections. He also says OpenAI intends to operate through democratic processes and would refuse unconstitutional orders.
Altman admits the original Friday announcement was rushed and poorly communicated. He also repeats that Anthropic shouldn't be classified as a supply chain risk (SCR) and should get the same contract terms.
OpenAI researcher wants democratic guardrails before AI reaches intelligence agencies
OpenAI researcher Noam Brown, the mind behind last year's reasoning model breakthrough, publicly backed the revised terms on X. He points out that the original contract language left too many open questions, especially around new surveillance capabilities that AI makes possible.
"The language is now updated to address this, but I also strongly believe that the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security," Brown writes.
These gaps need to be closed through democratic processes before intelligence agencies get access, he argues. Brown warns about a slow normalization effect where democratic oversight gets sidelined for major policy decisions.
He also says he plans to get more personally involved in AI policy at OpenAI. Given how fast the research is moving, he believes it's critical that researchers have a voice when policy decisions are being made.
The fact that OpenAI is walking things back this aggressively likely comes down to a string of leaks over the weekend. They paint a picture of an AI company that actively pushed the Pentagon deal forward while Anthropic was still at the negotiating table.
According to the New York Times, Altman reached out to Pentagon technology chief Emil Michael just one day after the Pentagon gave Anthropic its ultimatum. Within 24 hours, the two sides had a framework in place. OpenAI agreed that its AI could be used for "all lawful use," the exact wording Anthropic had been trying to avoid.