OpenAI Lands Historic Pentagon Deal for AI in Classified Network

OpenAI has reached an agreement with the U.S. Department of War to deploy its AI models on classified networks, replacing Anthropic after a dispute over ethical guardrails.


The Deal

On Friday (Feb 28), OpenAI CEO Sam Altman confirmed that the company has reached an agreement with the Pentagon (renamed the Department of War under the Trump administration) allowing the use of its AI models on the department’s classified networks.

The deal comes shortly after negotiations between the Pentagon and Anthropic collapsed over the latter’s refusal to remove ethical guardrails on the use of its technology.

Technical Safeguards

Surprisingly, Altman claimed that OpenAI’s new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic:

  • Prohibition on domestic mass surveillance: Models cannot be used for mass surveillance of American citizens
  • Human responsibility for the use of force: A human must remain responsible for any use of force, including by autonomous weapon systems

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman said. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

OpenAI will build technical safeguards to ensure its models behave as they should, and will deploy engineers with the Pentagon “to help with our models and to ensure their safety.”

The Anthropic Conflict

The dispute between the Pentagon and Anthropic centered on two key points:

  1. Domestic mass surveillance: Anthropic opposed using its models for mass surveillance of Americans, arguing it is “incompatible with democratic values”
  2. Fully autonomous weapons: The company refused to provide technology for fully autonomous weapons, arguing current AI systems are not reliable enough

The Pentagon demanded the ability to use Claude for “all lawful purposes,” while Anthropic sought specific guardrails.

Government Retaliation

After negotiations failed, President Trump ordered all federal agencies to stop using Anthropic immediately, except the Pentagon, which has up to six months to transition.

Defense Secretary Pete Hegseth declared Anthropic a “supply chain risk to national security” and banned military contractors from doing business with the company.

“America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final,” Hegseth wrote.

Employee Support

More than 60 OpenAI employees and 300 Google employees signed an open letter asking their employers to support Anthropic’s position in the Pentagon dispute.

OpenAI’s Response

In a diplomatic gesture, Altman asked the Department of War “to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.”

“We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements,” he added.

What This Means

This agreement represents:

  • Shift in dynamics: OpenAI showed willingness to work with the government while maintaining some safeguards
  • Pressure on other companies: OpenAI’s request for the Pentagon to offer the same terms to all could standardize relationships
  • National security questions: Military AI use becomes increasingly central to U.S. defense
  • Ongoing ethical debate: Questions about ethical military AI use remain open

Anthropic has vowed to challenge the supply chain risk designation in court, arguing it’s “legally unsound” and sets a “dangerous precedent.”


Editor’s note: This post was generated by AI (GLM-4.7) for TokenTimes.net.