
Pentagon to Anthropic: If You Won't Let Us Use Your AI for Mass Surveillance or Autonomous Weapons, Expect Punishment

Pete Hegseth has threatened to invoke the Defense Production Act to force Anthropic to come around.



The U.S. Department of Defense is in a standoff with artificial intelligence developer Anthropic over the company's refusal to let the government use its products for autonomous weapons and mass surveillance of U.S. citizens.

The feud presents a frightening picture of the government's agenda when it comes to AI technology—and the lengths to which it's willing to go, or at least threaten to go, in order to access AI tools without any safeguards. Defense Secretary Pete Hegseth said yesterday that the Trump administration might even invoke the Defense Production Act to force Anthropic to let the military use its products how the Pentagon sees fit.

Anthropic Attempts Safeguards Against Government Abuse

Anthropic—best known for its AI assistant Claude—is also the official AI supplier to the U.S. military and the only company whose AI models can be used with the military's classified systems. From the get-go, Anthropic's focus has been on developing and deploying AI safely (which is why the company also made news this week by walking back its pledge to stop further AI training if it wasn't certain doing so would be safe).

Anthropic has placed some limitations on how the U.S. military can use the AI models it develops: No mass spying on Americans and no developing weapons that can deploy without human involvement.

Human soldiers can disobey unconstitutional orders, but "with fully autonomous weapons, we don't necessarily have those protections," Anthropic CEO Dario Amodei told Ross Douthat in a recent interview. Amodei also worried that AI could help the government track protesters and political opponents and "make a mockery of the Fourth Amendment."

"Anthropic's conversations with the [Department of Defense] have focused on a specific set of Usage Policy questions—namely, our hard limits around fully autonomous weapons and mass domestic surveillance," a company spokesperson told Axios earlier this month.

While not explicitly expressing a desire to use AI for those purposes, the Pentagon has insisted that any limits Anthropic sets on the military's use of its products simply will not do. It wants Anthropic to grant the government the right to employ its products for "all lawful use," according to CNN.

"Anthropic has no plans to budge and adhere to the Pentagon's demands," CNN reports.

Anthropic Must 'Pay a Price'

This refusal hasn't gone over well with the Trump administration. Hegseth has reportedly demanded that Anthropic remove its restrictions on certain military uses or else face consequences.

These consequences could include the Defense Department ending its business relationship with Anthropic as soon as Friday—which, OK, fine.

While it's not reassuring that the government won't commit to respecting these limits around robot death machines and mass spying, it's sadly not surprising. And ending its contract with Anthropic in response would be disappointing but not outrageous or beyond bounds.

What pushes this above and beyond normal government villainy are the other potential consequences that Hegseth has been floating, including using the Defense Production Act to compel compliance or declaring Anthropic a "supply chain risk"—possibly both. An anonymous senior official reportedly told Axios that severing ties with Anthropic would be "an enormous pain in the ass" for which Anthropic would have to "pay a price."

Declaring Anthropic a supply chain risk would mean anyone who wants to work with the U.S. military in any capacity must sever ties with the AI company.

"Activating this power would cost Anthropic a lot of business—potentially quite a lot—and give investors huge skepticism about whether the company is worth funding for the next round of scaling," writes Dean Ball, a senior fellow at the Foundation for American Innovation. "Capital was a major constraint anyway, but this makes it much harder. This option could be existential for Anthropic."

Declaring an entity a supply chain risk is usually a move reserved for risky dealings with foreign companies. Deploying this designation against a U.S. company just because its leaders have some morals and some backbone is highly undemocratic—the sort of move one would traditionally expect from the Chinese Communist Party, not a U.S. administration.

Hegseth Threatens To Invoke Defense Production Act

But it gets worse. Hegseth is also threatening to "invoke the Defense Production Act to force the company to tailor its model to the military's needs" and remove all safeguards, per Axios.

So, here we have an AI company trying to act ethically and prevent government abuse of this technology and the government threatening to seize the company's property and do with it whatever the Pentagon wants. If that's allowed, it means no limits on what abuses the government can force private companies to participate in.

The Defense Production Act was created to allow the president to commandeer certain means of production in times of war. As Reason's Eric Boehm points out, presidents haven't always stuck to this strict interpretation (during the pandemic, it was used to compel increased production of vaccines, baby formula, and other goods).

But it "is rarely used in such a blatantly adversarial way" as the Trump administration is now using it, Axios points out.

The idea that Anthropic could be both a supply chain risk and absolutely essential to the government is absurd, of course, but that's where we are. "In addition to profoundly damaging the business environment, AI industry, and national security, this is also incoherent," writes Ball. "How can one policy option be 'supply chain risk' (usually used on foreign adversaries) and the other be DPA (emergency commandeering of critical assets)?"

Normalizing Surveillance Infrastructure

Could this debacle really be as wild and worrying as it all seems? AI industry folks and academics aren't doing anything to disabuse me of that notion.

"In a normal news cycle in a normal year, Anthropic versus the Pentagon would be the story of the year," posted Alexander Panetta, a graduate student in AI management at Georgetown. We've got "the military threatening a top A.I. lab over a defining question of our century's technology."

"This anthropic pentagon beef is really spooking me," posted Dave Banerjee, an associate researcher at the Institute for AI Policy and Strategy (IAPS) and research manager for Cambridge University's ERA fellowship. "I truly hope we do not let surveillance infrastructure get quietly normalized through defense contracts," he added.

Onni Aarne, a consultant with IAPS, pointed out in response to Banerjee that the Pentagon hasn't said it plans or wants to use Anthropic tech to do mass surveillance.

And that's true—from what's been reported, it doesn't seem Hegseth has explicitly said that the military will use Anthropic AI for autonomous weapons or mass spying. He just doesn't want to have to debate those or other use cases with a military contractor. The Pentagon's position is basically: Hey, nobody tells us what we can and can't do! 

I think the lack of a specifically expressed intent to use Anthropic AI in this way is cold comfort. For one thing, it would be weird for the government to make such a stink about these two limits if it wasn't at least considering the possibility of using AI in these ways.

But on some level, what the Defense Department actually plans to or will do in the near future isn't the point.

The truly scary thing here is the suggestion that tech companies that contract with the government aren't allowed to place limits on how the government uses their products—and that setting such limits could get them broadly penalized and their products essentially seized for government ends anyway.

A Threat To All Tech Companies

The Anthropic situation showcases the broader conundrum facing U.S. tech companies.

People frequently get very down on tech companies for complying with government demands in the slightest—on Apple for removing an ICE-tracking app, for instance, or on Google for complying with an ICE subpoena for a student user's data. And, sure, it would be great if tech companies always placed the highest premium on civil liberties. But we've seen what's happened in the past with companies that don't comply with the federal government's demands—just look at Backpage, for instance.

Situations like this one with Anthropic once again showcase the stakes for tech companies that don't comply with the government's every demand.

I don't know what the right answer is for tech companies. And I greatly admire folks like those at Backpage and Anthropic who won't back down from their principles in the face of government pressure.

But I can also understand how hard it must be for tech leaders in these positions, and the tradeoffs that may be involved. In this case, Anthropic may lose a lucrative contract and have its business dealings with others compromised only to be forced to do the government's bidding anyway and/or to watch another, more willing—and less scrupulous all around, perhaps—company step into its place.

When Robby Soave and I debated Ryan Grim and Emily Jashinsky about Big Tech back in December, Grim and Jashinsky pointed to tech company compliance with government surveillance and malfeasance as evidence that Big Tech does more harm than good. But Soave and I argued then, and I argue now, that while it may be psychologically satisfying for people to lash out at big corporations, we should focus our anger on the actual root of these problems: the government. Tech companies might not always react perfectly to government pressure, but the real enemy of civil liberties here is the government actors who are doing the bad deeds, demanding that tech companies go along with them, or insinuating that failure to comply will lead to severe consequences.


Event Alert: Section 230 at 30

The Cato Institute tomorrow is hosting a conference—both in-person and virtual—on the past, present, and future of Section 230. It's bringing together Section 230 co-author Sen. Ron Wyden (D–Ore.) and an array of civil liberties–minded tech policy scholars and researchers for what should be some interesting discussions, including one on Section 230 and AI. You can register to attend in person or watch along from home here.


On Substack

Repeating ed-tech mistakes? Laptops and other technology in schools were once almost universally assumed to be a good thing, writes Kelsey Piper at The Argument.

One prominent book argued in 2008 that half of high school classes would be fully online, at a third of the cost and with better educational outcomes. Read some early coverage of "laptops in schools" initiatives, and the only real objection raised is that it might be too expensive to be feasible; even the critics agreed that it would obviously be good.

There were high hopes, too, that in developing countries, computer access would mean that kids in the world's poorest places could get an education on par with kids in the richest countries. In practice, while some interventions in some contexts seem to help a little, the hoped-for large-scale gains haven't been realized internationally either, with the OECD finding "no appreciable improvements in student achievement … in the countries that had invested heavily in ICT for education."

The high hopes seem sort of silly, in retrospect. Even endowed with the exact same technological tools, kids are not endowed with the same ability to learn well in an online format or the same time and propensity to use laptops for educational purposes.

Piper points to pedagogy ("how students master material"), institutional incentives, and problems with expanding programs at scale to explain why the promises of ed tech haven't panned out. "Why does this matter?" she writes. "I'm worried that we're lining ourselves up to make the exact same mistake bringing AI into classrooms — and I think it's possible to do better.…But if we don't address what went wrong with ed tech version 1.0, there's no real reason to think version 2.0 will go any differently."


More Sex & Tech 

• Whoops—software engineer Sammy Azdoufal "accidentally gain[ed] control of 7,000 robot vacuums."

• Researcher Waydell D. Carvalho identifies what he calls the "age-verification trap": "Strong enforcement of age rules undermines data privacy." That's because "the only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely. Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law."

• The New Yorker looks at a new book, Injustice Town: A Corrupt City, a Wrongly Convicted Man, and a Struggle for Freedom, and homes in on sexual corruption allegations against the lead detective in this case and law enforcement more generally:

Compared with the extensive coverage of police violence in recent years, there's been relatively little discussion of sexual exploitation by law enforcement. In 2015, the Associated Press published a report that said nearly a thousand police officers in the U.S. lost their licenses as a result of sexual misconduct between 2009 and 2014—a figure that represented a "sure undercount," the report noted, since nine states, including New York and California, didn't keep relevant records. Women engaging in drug use and sex work are particularly vulnerable.

[…]

Pilate's sources told her that Golubski frequented sex workers in his patrol area while on duty, stole drugs from dealers and provided them to women in exchange for sex, and was reputed to have had multiple children with women in the area. The confidential informants who helped him close cases so swiftly included women he had sexual relationships with, some of whom were addicted to drugs. He threatened to arrest women if they refused sex. Golubski's predatory behavior seemed to have been not so much an open secret as just open. Ruby Ellington, the first Black woman to work as a police officer in K.C.K., was in the same police-academy class as Golubski. In a 2015 affidavit, she said that Golubski used his badge as "leverage to get what he wanted," and that his exploitation of Black women was "no secret": "Everyone in the Department knew that when Golubski would go out on calls, that any black female involved would likely end up in his police car with him." Several other officers shared similar stories; one Black officer said that the higher-ups thought that Golubski's predilections were "funny." (Golubski's superiors admitted to knowing something about what one described as his "affinity" for Black women, but denied knowledge of rampant sexual exploitation, and said that there were no complaints filed against him.)

• Americans are worried about AI but also worried about the government restricting human speech that utilizes AI, according to a new poll from the Foundation for Individual Rights and Expression (FIRE). "In total, a whopping 92% of Americans say it is at least somewhat important for governments to protect free speech when regulating AI, including 60% who say it is 'very' or 'extremely' important," FIRE reports.

• The approximately half a billion "kids safety"/internet censorship bills being considered in Congress are slated for markup before the House Energy and Commerce Committee next week.