WASHINGTON: OpenAI has reached an agreement with the U.S. Defense Department to deploy its artificial intelligence models on the military’s classified cloud networks, the company said late Feb. 27, as President Donald Trump ordered federal agencies to stop using technology from rival AI firm Anthropic.

OpenAI Chief Executive Sam Altman said the department, which the Trump administration also refers to as the “Department of War,” agreed to include safeguards in the arrangement. Altman said the agreement prohibits domestic mass surveillance and requires human responsibility for any use of force, including in autonomous weapon systems. He said OpenAI will build technical controls intended to keep its models operating within those parameters.
Trump’s directive, issued the same day, instructed federal agencies to cease work with Anthropic and begin transitioning away from its products. Trump said the Defense Department and other agencies already using Anthropic systems would have six months to complete a phase-out, while other parts of the government were told to stop using the technology immediately. Trump also said the administration would pursue civil and criminal penalties if Anthropic did not cooperate with the transition.
Supply chain designation
The Defense Department said it would designate Anthropic a supply chain risk, a step that would formally discourage or restrict the use of the company’s technology in Defense Department work. Defense Secretary Pete Hegseth said the designation followed a dispute over whether a private company’s policies could limit military use of frontier AI systems. The department has not released the OpenAI agreement text, and OpenAI has not disclosed which models will be deployed, the scope of access, or the specific classified environments covered.
Anthropic said it would challenge any supply chain risk designation in court and would not change its position opposing mass domestic surveillance and fully autonomous weapons. In a statement, the company said a supply chain risk designation under 10 U.S.C. 3252 can apply to the use of its Claude models in Defense Department contracts but does not govern how contractors use its systems outside those contracts. Anthropic said its tools have been used for national security-related work, including deployments on government-approved infrastructure.
Existing defense AI programs
The OpenAI and Anthropic developments follow a broader Defense Department push to bring commercial AI tools into military and enterprise operations. In July 2025, the department’s Chief Digital and Artificial Intelligence Office announced separate awards with ceilings of up to $200 million each to Anthropic, Google, OpenAI, and xAI to prototype “frontier AI” capabilities for national security applications. Anthropic has separately said it received a two-year prototype agreement with a $200 million ceiling to develop and test AI capabilities for defense operations.
Earlier this month, OpenAI for Government said it would deploy a custom version of ChatGPT on GenAI.mil, the Defense Department’s secure enterprise AI platform for unclassified work used by about 3 million civilian and military personnel. OpenAI said the system runs in authorized government cloud infrastructure with built-in safety controls, and that data processed on GenAI.mil remains isolated to the government environment and is not used to train or improve OpenAI’s public or commercial models. The newly announced classified-network agreement adds a separate deployment track as agencies move to comply with the administration’s Anthropic phase-out order. – By Content Syndication Services.
