OpenAI Strikes Pentagon Deal for Classified Networks

OpenAI announced Friday (Feb 27) a landmark agreement with the U.S. Department of Defense (renamed the Department of War by the Trump administration) to deploy its AI models on classified government networks, just hours after President Donald Trump banned federal use of rival Anthropic’s technology.

The Agreement

OpenAI CEO Sam Altman confirmed in an X post: “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.”

The deal allows the Pentagon to use OpenAI’s AI systems for any lawful purpose, including military and intelligence applications. As part of the agreement, OpenAI implemented technical guardrails to ensure its technology won’t be used for:

  • Domestic surveillance in the United States
  • Autonomous weapons or lethal weapon systems

Context: The Anthropic vs Pentagon Clash

The agreement follows weeks of tense negotiations between the Pentagon and Anthropic, which had been the first lab to deploy models across the DoD’s classified network.

The Dispute

  • The Pentagon demanded Anthropic allow use of its models for all lawful purposes under a $200 million contract
  • Anthropic requested guarantees its technology wouldn’t be used for mass surveillance of Americans or autonomous weapons
  • Negotiations stalled and collapsed Friday at 5:01 PM

“Supply-Chain Risk” Designation

After negotiations collapsed, Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security,” a label typically reserved for foreign adversaries.

Trump also labeled the startup a “radical Left AI company” and directed all federal agencies to “immediately cease” all use of Anthropic’s technology.

OpenAI’s Ethical Safeguards

According to Altman, OpenAI shares the same “red lines” as Anthropic. In his post, he highlighted:

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

OpenAI will build technical safeguards to ensure its models behave as expected, and will deploy personnel to support those models and help ensure their safe use.

Altman also urged the DoW to offer the same terms to all AI companies, terms that, in his view, “everyone should be willing to accept.”

Analysis

This deal represents a massive strategic victory for OpenAI on two fronts:

1. Commercial

  • Access to the Pentagon’s enormous contract budget
  • First-mover advantage in classified government networks
  • A direct competitive advantage over Anthropic, now shut out of federal use

2. Political

  • Alignment with the Trump administration
  • Strategic differentiation from Anthropic, which had been criticized for months for being “overly concerned with AI safety”

Anthropic’s Reaction

Anthropic stated it was “deeply saddened” by the Pentagon’s decision to designate the company as a supply-chain risk. The company said it intends to challenge that designation in court.

Outlook

This agreement marks an inflection point for the AI industry in 2026:

  1. Accelerated Militarization: Frontier AI is now officially part of U.S. military infrastructure
  2. Strategic Differentiation: AI companies must decide between flexibility (like OpenAI) and stricter safeguards (like Anthropic)
  3. Regulatory Pressure: The “supply-chain risk” designation creates a concerning precedent for the industry

Anthropic stated it intends to continue defending its safety principles, even if that means losing access to significant government contracts.
