The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Anthropic, the Pentagon, and the Defense Production Act
Prof. Alan Rozenshtein (Minnesota) has a very interesting item on this today at Lawfare; I'm not an expert on the subject, so I can't offer an independent evaluation, but I thought it was worth passing along. (Let me know, please, if you can suggest some contrary views that are also credible and worth passing along.) An excerpt, but you should read the whole thing:
On Tuesday, Feb. 24, Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and threatened to invoke the Defense Production Act (DPA) if Anthropic doesn't agree to the Pentagon's terms by Friday. The DPA, Hegseth warned, would let the government compel Anthropic to provide its technology on the Pentagon's terms. Anthropic is resisting allowing its artificial intelligence (AI) to be used for autonomous weapons or mass surveillance—two red lines that the company has maintained since entering the defense market.
I argued last week that Congress—not the Pentagon or Anthropic—should set the rules for military AI. The DPA threat makes that case stronger. But first, it's worth understanding what the DPA can actually do here, because the answer depends entirely on what the government is demanding. The legal analysis is genuinely complicated: Different demands raise very different legal questions, and a statute whose core compulsion powers were designed for steel mills and tank factories maps awkwardly onto a dispute about AI safety guardrails….
The DPA is a Korean War-era statute that gives the president broad authority to direct private industry in the name of national defense. It has been extended many times since its enactment, most recently through September 2026.
The DPA already applies to AI. The Biden administration's since-rescinded Executive Order 14110, Section 4.2, invoked the DPA to require AI companies to report on training activities, red-team results, and model weights. But President Biden used Title VII, which contains the DPA's information-gathering authority. Based on the available reporting, Hegseth is likely threatening Title I—the statute's core compulsion power. That's an enormous escalation.
Biden's precedent cuts both ways for Anthropic. It makes it harder for the company to argue the DPA doesn't reach AI at all. But establishing that AI falls within the statute's scope doesn't mean every demand is lawful. The range of possible demands under Title I is enormous, and the legal analysis is different for each….
The legal analysis depends on what the government actually demands, and two possibilities stand out. Anthropic's contract with the Pentagon includes usage-policy restrictions—contractual guardrails that prohibit applications such as autonomous weapons and mass surveillance.
The Pentagon originally agreed to these terms. But in January, Hegseth's AI strategy memorandum directed that all Defense Department AI contracts incorporate standard "any lawful use" language within 180 days—a direct collision with Anthropic's restrictions. The government might now demand that Claude, Anthropic's frontier AI model, be provided without those contractual guardrails, while leaving the model itself untouched. Or it might go further and demand that Anthropic retrain Claude to strip the safety restrictions out of the model entirely….
The demand most likely at issue is that the government wants Claude without Anthropic's contractual usage-policy guardrails. Here the [legal] characterization question is genuinely contested, and each side's statutory argument flows from how it characterizes the demand….
The more extreme possibility would be the government compelling Anthropic to retrain Claude—to strip the safety guardrails baked into the model's training, not merely modify the access terms….
But the deeper problem continues to be that this fight is happening because Congress hasn't set substantive rules for military AI. If Congress had legislated guidelines on autonomous weapons and surveillance, Anthropic would likely be far more comfortable selling its systems to the military—and the DPA threat would never have arisen. The question of what values to embed in military AI is too important to be resolved by a Cold War-era production statute.