OpenAI has signed its first official contract with the US Department of Defense, a $200 million agreement to develop and provide AI technologies. The one-year deal, with work centered mainly in the Washington, D.C. area, marks the company's debut as a direct Pentagon contractor.
Under the agreement, OpenAI will help the Defense Department with military healthcare, program data analysis, and proactive cyber defense. All uses must comply with OpenAI’s own usage policies.
The contract is part of "OpenAI for Government," a new initiative to consolidate the company’s AI offerings for public-sector clients in the US. Existing partnerships with agencies like NASA, the NIH, the Air Force, and the Treasury Department will be brought together under this program.
Through the initiative, federal, state, and local agencies can access OpenAI’s models in secure, compliant environments. OpenAI is also offering custom AI models for security-related use cases and direct technical support. The first pilot project will run with the Defense Department’s Chief Digital and Artificial Intelligence Office (CDAO).
Ethical guidelines limit government use
OpenAI emphasizes that all government applications must follow its own usage guidelines. These prohibit uses such as facial recognition without consent, biometric categorization based on sensitive attributes, and emotion tracking in the workplace. Automated decisions in areas like migration, lending, or infrastructure management are not allowed unless reviewed by qualified personnel.
OpenAI also bans the use of its services to develop weapons or to cause violence, death, or property destruction. Using its models to facilitate self-harm, aid cyberattacks, or conduct unauthorized surveillance is strictly off-limits. Political influence operations, including targeted campaigns and disinformation, are also forbidden.