A First Amendment Right Not To Use AI for Evil?
Anthropic sues the federal government—and kicks off a debate about free speech for artificial intelligence systems.
Anthropic is suing the federal government over its response to the company's refusal to remove safeguards that prevent its artificial intelligence system, Claude, from being used for mass domestic surveillance and killer robots.
In a lawsuit filed Monday, the company accuses the Trump administration of illegal retaliation. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," states the complaint.
The suit has kicked off a new round of debate about free speech for AI systems more broadly, in addition to raising critical questions about the government's ability to compel tech companies to act in ways that company leaders consider unethical.
You are reading Sex & Tech, from Elizabeth Nolan Brown. Get more of Elizabeth's sex, tech, bodily autonomy, law, and online culture coverage.
'Public Castigation' and Retaliation
"When Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to 'IMMEDIATELY CEASE all use of Anthropic's technology'—even though the [Department of Defense] had previously agreed to those same conditions," states Anthropic's complaint, filed in the U.S. District Court for the Northern District of California. "Hours later, the Secretary of War directed his Department to designate Anthropic a 'Supply-Chain Risk to National Security,' and further directed that 'effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.'" (For more background on all this, see here and here.)
Rather than simply ending Anthropic's military contract over this dispute, the Trump administration went on a campaign of "public castigation," complains Anthropic.
Trump called it a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and directed "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." A top Department of Defense official called Anthropic CEO Dario Amodei "a liar" with a "God-complex" who was trying "to personally control the US Military" and was "ok putting our nation's safety at risk."
This was followed up by Defense Secretary Pete Hegseth declaring Anthropic a supply-chain risk and federal agencies across the board terminating their contracts with the company.
I don't think there's any disputing that this was an absurd and bullying overreaction, injurious to free markets and unbecoming of a free and democratic country. No company should be compelled to let the U.S. military use its tech tools for whatever authorities want, and no company should be retaliated against for this refusal.
But the grounds on which Anthropic is suing are interesting—and controversial. The company argues that in addition to violating federal administrative law, the administration had attacked its "core First Amendment freedoms."
"The Constitution confers on Anthropic the right to express its views—both publicly and to the government—about the limitations of its own AI services and important issues of AI safety," states its complaint. "The government does not have to agree with those views. Nor does it have to use Anthropic's products. But the government may not employ 'the power of the State to punish or suppress [Anthropic's] disfavored expression.'"
Is Claude Protected Speech?
A group of organizations friendly to civil liberties and the First Amendment—including the Foundation for Individual Rights and Expression (FIRE) and the Cato Institute—have filed a court brief in support of Anthropic's position, arguing that "the Pentagon's temper tantrum is a textbook violation of Anthropic's First Amendment rights."
According to these groups, it's not just statements by Anthropic leaders that are protected—it's the AI system itself.
"Claude is fundamentally expressive," their brief states. "The Pentagon's demand that Anthropic remove safeguards on that system—to change what Claude must and may say, analyze, and refuse—asks Anthropic to make a trade on a core freedom of expression."
Anthropic makes a similar claim in its complaint, suggesting that First Amendment protection "extends to its Usage Policy," which "has never permitted Claude to be used for mass surveillance of Americans or for lethal autonomous warfare."
But is the policy governing Claude's outputs really speech, or a form of conduct?
Are all AI systems speech?
These are thorny questions First Amendment experts are still hotly debating.
University of Akron law professor Jess Miers is on Anthropic's side on this one. We have an existing body of case law that says "that curating and disseminating expression (even via algorithms) is a protected editorial activity," and "that's precisely" what AI model developers like Anthropic do, Miers posted to BlueSky.
"Model developers meticulously curate the datasets that they deem important for shaping the model's 'worldview,'" Miers pointed out. "Those choices alone are editorial: what kind of information do I want my model to train on? How much of it? What sources do I trust? The data curation decisions shape the outputs."
As Miers sees it, "DOD is effectively trying to force Anthropic to make different editorial decisions that reflect the views and goals of the Administration."
Some think this is taking things too far.
"We don't want everything an AI does to be covered by the First Amendment," posted University of Minnesota law professor Alan Rozenshtein. "It will make regulation of what will increasingly be large portions of the economy impossible."
"It's true that AI output will often be protected speech, but that's because it will implicate *listener's* ability to access AI output," Rozenshtein continued. "But here the AI output is primarily being used as *conduct* for use in government military systems. Anthropic absolutely has a First Amendment right to not be punished for its public statements. But the government has to have the right not to use a tool because it doesn't like its output, and that's impossible if the output is itself First Amendment."
Other Anthropic Speech Unquestionably Protected
It's possible that a court need not decide whether AI outputs are protected speech to find a First Amendment violation here.
Anthropic's public statements about AI limits and safeguards and so on are obviously protected. So are its statements and petitions to the government.
And there's at least a case to be made that the Trump administration went so hard after Anthropic precisely because of its very vocal rejection of what the administration was asking it to do.
One could argue that an objection to the limitation on Claude's outputs motivated terminating Anthropic's contract with the military, and that's OK. But the remarkable public vitriol and the administration's above-and-beyond punishment hinged on the fact that the company said no to the government forcefully and publicly—and that's not OK.
The administration's "needless and extraordinarily punitive actions, imposed in broad daylight, are a paradigm of unconstitutional retaliation," Anthropic suggests in its complaint. They were "designed to punish ideological disagreement."
"As limitations go, refusing to participate in the creation of a totalitarian police state or the production of killer robots seem reasonable lines to draw," notes J.D. Tuccille. But whether the lines are reasonable or not doesn't really matter—the government can "respect those limits or take its shopping needs elsewhere." Instead, Trump and his allies chose a third option: throwing "public temper tantrums over Anthropic telling them 'no.'"
The Trump administration's overblown statements and the fact that it's not just ending the defense contract but trying to prevent others from doing business with the company (through the supply-chain risk designation) make clear that it was punishing Anthropic "for its corporate beliefs," suggests Tuccille.
Event Alert
Come watch me interview an AI avatar (and some humans) about orgasmic meditation and more. I'll be moderating a panel in New York City tomorrow night about the case against former OneTaste leaders Nicole Daedone and Rachel Cherwitz and the demonization and regulation of alternative practices and beliefs more broadly.
In a first for me, one of the three panelists will be an AI avatar, since the person it represents, Daedone, is currently in federal prison; she was denied bail while awaiting sentencing (which is scheduled for later this month). Her AI avatar was trained on her books and lectures and "isn't a mere chatbot or a summary — it's a distillation of Nicole's actual thinking, language, and philosophy, able to engage in real conversation about the ideas she has spent a lifetime developing," per the event organizer's summary. I have no idea what to expect, but it should be a fun experiment, nonetheless.
The free event takes place in Harlem and starts at 7 p.m. More info here.
In the News
New: A Houston woman is suing Tesla in Harris County, alleging that her Cybertruck, while using Tesla's "Full Self-Driving mode" tried to drive the car off of a bridge. Here is the dashcam footage provided by her lawyers: www.chron.com/culture/arti…
— gwen howerton (@kissphoria.bsky.social) 2026-03-09T19:06:48.930Z
On Substack
What bank tellers and iPhones can teach us about AI. David Oks looks at why ATMs didn't destroy bank teller jobs—but iPhones did. Since the first decade of this century, "bank teller employment has fallen off a cliff," a situation Oks attributes to smartphones. Oks has a theory about why:
When a technology automates some of what a human does within an existing paradigm, even the vast majority of what a human does within it, it's quite rare for it to actually get rid of the human, because the definition of the paradigm around human-shaped roles creates all sorts of bottlenecks and frictions that demand human involvement. It's only when we see the construction of entirely new paradigms that the full power of a technology can be realized. The ATM substituted tasks; but the iPhone made them irrelevant.
Could this theory have relevance for AI automation and jobs today?
The lesson is worth stating plainly. The ATM tried to do the teller's job better, faster, cheaper; it tried to fit capital into a labor-shaped hole; but the iPhone made the teller's job irrelevant. One automated tasks within an existing paradigm, and the other created a new paradigm in which those tasks simply didn't need to exist at all. And it is paradigm replacement, not task automation, that actually displaces workers—and, conversely, unlocks the latent productivity within any technology. That's because as long as the old paradigm persists, there will be labor-shaped holes in which capital substitution will encounter constant frictions and bottlenecks.
This has, I think, serious implications for how we're thinking about AI.
People in AI frequently talk about the vision of AI being a "drop-in remote worker": AI systems that can be inserted into a workflow, learn it, and eventually do it on the level of a competent human. And they see that as the point where you'll start to see serious productivity gains and labor displacement.
[…] But I'm skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.
The real productivity gains from AI—and the real threat of labor displacement—will come not from the "drop-in remote worker," but from something like Dwarkesh Patel's vision of the fully-automated firm.
More here.
Read This Thread
After years of the media uncritically accepting the "something must be done for the children" narrative, it's nice to see some of them waking up to the reality we've been warning about from the start. https://t.co/AJkllMeEZQ
— Ari Cohn (@AriCohn) March 9, 2026
More Sex & Tech News
• An app that pledged to help men overcome pornography "addiction" wound up leaking "intimate data on hundreds of thousands of its users, including their masturbation habits, and lied about its security issues," 404 Media reports.
• "Red states get Waymos. Blue states get studies": Kelsey Piper on the culture of stalling in progressive government.
• OpenAI is delaying the rollout of its "adult mode" for ChatGPT.
• "Another meta-analysis finds near zero effects for screen time," notes psychologist Chris Ferguson:
Indeed another meta-analysis finds near zero effects for screen time.
This, despite many of the effect sizes being bivariate correlations, and the authors acknowledging many of the longitudinal studies failed to correct for the Time 1 outcome variable…a very basic control,… https://t.co/cbPFAZiVq5
— Chris Ferguson ☘️ (@CJFerguson1111) March 10, 2026
• Real estate moguls Alon, Oren, and Tal Alexander were convicted on Monday of federal sex trafficking charges.
• Social media restrictions for minors in Florida and Georgia went to court this week. In the Georgia case, federal appellate judges appeared skeptical of the constitutionality of a law requiring minors to get parental permission to be on social media. The same judges are also considering Florida's House Bill 3, which bans or restricts social media account creation for minors; Courthouse News Service has a rundown of yesterday's oral arguments.
• The Whore D'ouvres newsletter explores the difference between defending "rape fantasies"—better termed "consensual non-consent"—and defending rape.
• Major publishers are suing the shadow library Anna's Archive.
>>Are all AI systems speech?
lol Jackson can't even tell you why she's a chick & you want her deciding whether AI is speech?
So Anthropic demands money for refusing to provide the services under contract? Or do they demand money for interfering with the contract they rejected by contracting services that belong under the contract they refused? While I can understand the outrage, it's not quite the winner you think.
It was a contract negotiation. Anthropic did not like the terms the DoD was insisting on and thus they did not sign the contract. Then the DoD had a hissy fit.
I think Colonel Nathan R. Jessup said it best:
"YOU CAN'T HANDLE THE TRUTH!"
"Rather than simply ending Anthropic's military contract over this dispute, the Trump administration went on a campaign of "public castigation," complains Anthropic.
Trump called it a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and directed "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." A top Department of Defense official called Anthropic CEO Dario Amodei "a liar" with a "God-complex" who was trying "to personally control the US Military" and was "ok putting our nation's safety at risk.""
Trump insulting the company and its officers is unpleasant, maybe even arguably defamatory, but it is not a free speech violation.
"DOD is effectively trying to force Anthropic to make different editorial decisions that reflect the views and goals of the Administration."
Is trying to get the company to provide the product or services DOD contracted it to provide an unreasonable demand?
If Anthropic is going to balk at its products being used in a way that it objects to, then does DOD have an interest in Anthropic's products not being used by any DOD contractor?
It would appear the DOD is indeed taking its business elsewhere. All of its business.
ENB is totally full of shit, ain't she? I believe the root cause is a raging case of juvenile TDS.
Let's pretend the federal government is the federal government, and Anthropic is the American Public during the Communist Chinese Virus panic.
Does that change anything?
I do not see how that is analogous to this situation.
Connecting the ban on their product to their politics makes it a 1A issue. Trump effectively proved Anthropic's case for them.
Anthropic excused not living up to their contract on their politics.
At best, it is a chicken and egg argument.
Have you seen the actual contract? How do you know they violated it?
So you are acting like the BBC suggesting the Department of War is a secondary name for the Department of Defense and decided to keep using Department of Defense.
Please FFS, cite the actual request from the Department of War made to Anthropic relating to the use of its AI product already owned by the Department of War.
OpenAI just signed an agreement for advanced AI tools, and the agreement completely disallows usage for mass domestic surveillance.
The name of the Department of Defense is written into federal law. Trump can't change that himself. Same with the Kennedy Center.
I buy a laptop computer. There's a seal that if broken voids the warranty.
Does that mean I am not able to unseal it, modify it on my own?
Does that mean I must go to the manufacturer or their authorized shop to modify the laptop or can be sued if I don't?
Does that mean if I decide to not use the product after requesting to modify it to my liking and instead remove the product usage from all my companies departments that I can be sued because I elected to go with another product?
Does that mean I can't write a review about the product and suggest no one buys it?
Please cite the exact language used where the Department of War requested Anthropic AI to be used for evil.
It might mean that if you go to a website, or create art that the manufacturer of your computer objects to, then they have the right to brick your computer.
Major publishers are suing the shadow library Anna's Archive.
Of note:
1. Anna's Archive nominally started itself as a way for students to access library and University course books without paying onerous fees to Universities and publishers *on top of tuition*.
2. If OpenAI scrapes every book from Anna's Archive and beyond, to teach ChatGPT how to think, it's (arguably) fair game.
3. Spotify and a group of major music publishers filed a related New York lawsuit against Anna's Archive in January seeking trillions of dollars in damages for pirating tens of millions of audio files. - Neither Spotify nor major music publishers are original creators of the respective works Anna's Archive pirated. Especially given that the majority of the archive is literature.