Anthropic Labeled a Supply Chain Risk, Banned from Federal Government Contracts
OpenAI has entered a contract with the Defense Department allowing all lawful use of ChatGPT after Anthropic refused to remove its restrictions on domestic mass surveillance and fully autonomous weapons systems.
President Donald Trump banned all federal agencies from contracting with Anthropic after the company refused to remove its restrictions on domestic mass surveillance and fully autonomous weapons systems from its AI model, Claude. Later that same day, OpenAI CEO Sam Altman announced that the Defense Department had agreed to use ChatGPT supposedly subject to the very same restrictions.
The acrimony between the federal government and Anthropic, which had been brewing for two months, reached a boiling point on Tuesday, when Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei met to discuss the terms of the Defense Department's contract with Anthropic. Hegseth demanded that Anthropic equip the Pentagon with an AI model "free from usage policy constraints that may limit lawful military applications" or be labeled a supply chain risk and possibly nationalized.
Amodei refused, publishing a public letter Thursday night explaining his constitutional, ethical, and technical concerns about unleashing AI capable of surveilling American citizens at home and executing people abroad according to its own judgment instead of a human being's.
On Friday evening, Hegseth made good on his promise. Following Trump's declaration that "EVERY Federal Agency in the United States Government…IMMEDIATELY CEASE all use of Anthropic's technology," Hegseth directed the Defense Department to designate Anthropic a supply chain risk to national security. This means that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," according to Hegseth's statement.
Mark Dalton, senior director of technology and innovation at the R Street Institute, says the Pentagon is guilty of a glaring contradiction: "consider[ing] Anthropic's technology so vital to national defense that they thought that invoking the Defense Production Act was justified to retain access" earlier this week, and then "suddenly [designating the company] a supply chain risk."
Anthropic poses such a grave national security threat, apparently, that the president is permitting a six-month window during which federal agencies, including the Pentagon, will continue using Claude.
Dalton warns that "the next time the designation is applied to a company with actual ties to a foreign adversary, the credibility to make that case will be diminished." Dean Ball, senior fellow at the Foundation for American Innovation, described the designation as "the most damaging policy move I have ever seen USG try to take."
It's unclear why Anthropic is just now being deemed a national security threat. The company's usage policy has explicitly prohibited the use of its AI models for domestic surveillance purposes and to "produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life" since June 2024, when it began supporting American warfighters. (These provisos are maintained in Anthropic's current usage policy.) The company received its $200 million Pentagon contract over a year later, in July 2025.
Neither the president nor the secretary of defense was ignorant of these usage conditions.
Trump implied that the Pentagon had stipulated to these terms of service in his Friday Truth Social post: "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution" (emphasis added). Hegseth did likewise, stating that "the Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield."
Neither Trump nor Hegseth has claimed that Amodei changed Anthropic's terms of service to commandeer foreign policy and military decisions from the federal government.
Friday night, OpenAI CEO Sam Altman announced that ChatGPT will replace Claude at the Pentagon after he "reached an agreement with the Department of War to deploy our models in their classified network." Altman claimed that the Defense Department agrees with OpenAI's "most important safety principles…prohibitions on domestic mass surveillance and human responsibility for the use of force…and we put them into our agreement."
Such an agreement would contain the very same restrictions over which the Pentagon declared Anthropic a supply chain risk.
About an hour after Altman's announcement, Under Secretary for Foreign Assistance Jeremy Lewin clarified that the terms of the OpenAI-Pentagon contract flow "from the touchstone of 'all lawful use' that DoW has rightfully insisted upon & xAI agreed to."
In a statement responding to Hegseth's comments, Anthropic stated that "no amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," and that the company "will challenge any supply chain risk designation in court."