Pentagon pushes AI companies to deploy unrestricted models on classified military networks
Key Points
- The Pentagon is pressuring OpenAI, Anthropic, Google, and xAI to make their AI available on classified military networks without standard safety restrictions.
- Anthropic is resisting, refusing to allow its AI to be used for autonomous weapons control or domestic surveillance.
- AI researchers warn hallucinations could have deadly consequences in military settings, but Pentagon officials argue company-imposed safeguards are unnecessary.
The Pentagon is pressing leading AI companies including OpenAI, Anthropic, Google, and xAI to make their AI tools available on classified military networks - without the usual usage restrictions.
That's according to Reuters, which cites multiple sources familiar with the matter. At a White House meeting on Tuesday, Pentagon technology chief Emil Michael told tech executives that the military wants AI models available across all classification levels. Classified networks are used for highly sensitive tasks like mission planning and weapons targeting.
Earlier this week, OpenAI signed an agreement for the unclassified network genai.mil, which serves more than three million Department of Defense employees. Many of the usual usage restrictions were lifted as part of the deal, though some safeguards remain in place. Google and xAI have struck similar agreements. According to OpenAI, expanding to classified networks would require a separate agreement.
Anthropic resists dropping safety restrictions
Negotiations with Anthropic are proving significantly more difficult. Anthropic is the only company whose AI chatbot, Claude, is already available on classified networks through third-party providers, but it refuses to allow its technology to be used for autonomous weapons control or domestic surveillance. At the same time, Anthropic has said it wants to help the US maintain its lead in AI.
AI researchers warn about the risks involved: AI chatbots still hallucinate, and in sensitive military environments, those errors could have deadly consequences. AI companies try to limit risks through built-in safeguards and usage policies. The Pentagon sees it differently - according to Reuters, military officials are frustrated by these company-imposed restrictions. From their perspective, the military should be free to use commercial AI tools however it sees fit, as long as the use complies with US law. Additional rules set by the companies themselves, they argue, are unnecessary.