Judge Rejects 'Orwellian Notion' That Anthropic Is a Supply Chain Risk for Disagreeing With the Government
Judge Rita Lin's preliminary injunction confirms what government officials had implicitly acknowledged: The supply chain risk designation was punishment, not policy.
On Thursday, a federal judge blocked the government from banning Anthropic from federal contracting. The ruling confirms what the Defense Department's continued use of Anthropic implies: The AI developer is not a supply chain risk, and disagreement with the government is not subversion.
Judge Rita Lin's order reverses President Donald Trump's February 27 directive for every federal agency "to IMMEDIATELY CEASE all use of Anthropic's technology." It also strikes down Defense Secretary Pete Hegseth's order that his agency designate Anthropic as a "Supply-Chain Risk to National Security" and the March 3 directive that formalized that designation. As Lin writes, her order "restores the status quo."
The dispute between Anthropic and the Defense Department centered on two narrow restrictions in the company's usage policy: Claude, the company's AI model (and the only one the Defense Department used), may not be deployed for fully autonomous weapons systems or for mass domestic surveillance. The day before Trump and Hegseth acted, CEO Dario Amodei stated that such uses are "simply outside the bounds of what today's technology can safely and reliably do" and that Anthropic "cannot in good conscience accede to" the agency's demands to permit them.
In response, Trump labeled Anthropic a "RADICAL LEFT, WOKE COMPANY" and Hegseth condemned Anthropic's "defective altruism" before banning Anthropic—and anyone with a commercial relationship with the company—from doing business with the federal government. Reason's Elizabeth Nolan Brown explained that "the administration's above-and-beyond punishment hinged on the fact that the company said no to the government forcefully and publicly—and that's not OK." Lin agrees that "the record supports an inference that Anthropic is being punished for criticizing the government's contracting position," which amounts to "classic illegal First Amendment retaliation."
In addition to arguing that its "core First Amendment freedoms are under attack," Anthropic argued that the Defense Department lacked statutory authority to designate the company a supply chain risk. Federal law authorizes the secretary of defense to exclude a company from defense procurement only when it may be used by an "adversary" to "sabotage" or "subvert" a national security system. Anthropic affirmed that it "is not, and has no ties to, an 'adversary'" and that the statute does not permit "the Secretary to redefine 'supply chain risk' to cover a contractor who declines to modify its terms of use to track the Department's preferences."
Lin concluded that the Defense Department provided "no legitimate basis to infer…[Anthropic] might become a saboteur" and rejected "the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government." While the Defense Department is at liberty to decide with whom to contract, Lin ruled it may not attempt "corporate murder" because a firm refuses to kowtow to its every whim or exercises its First Amendment rights in a manner embarrassing to the department.
Dean Ball, senior fellow at the Foundation for American Innovation, celebrated the ruling as a win not only for Anthropic, but "all red-blooded Americans who are, as the founders would have said, 'jealous of their liberties.'" The decision is also a win for American AI development: By confirming that companies cannot be blacklisted for their speech, it restores investor confidence in the industry.