Artificial Intelligence

'Woke' AI Is the Latest Threat to Free Speech

But not in the way you think

Elizabeth Nolan Brown | 7.23.2025 11:00 AM

(Photo by Google DeepMind on Unsplash )

Politicians seem increasingly intent on modeling artificial intelligence (AI) tools in their own likenesses—and their plans could sneakily undermine free speech.

For Republicans, the ruse involves fighting "woke" AI. The Trump administration is reportedly readying an executive order aimed at preventing AI chatbots from being too politically correct.


Conservatives complaining about AI systems being too progressive is nothing new.

And, sure, the way some generative AI tools have been trained and programmed has led to some silly—and reality-distorting—outcomes. See, for instance, the great black pope and Asian Nazi debacle of 2024 from Google Gemini.

Of course, we also have an AI system, Grok, that has called itself MechaHitler and endorsed anti-Jewish conspiracy theories.

That's the free market, baby.

And, all glibness aside, it's way better than the alternative.

When the Cure Is Worse Than the Disease

Both Gemini and Grok have been retooled to avoid similar snafus going forward. But the fact remains that different tech companies have different standards and safeguards baked into their AI systems, and these differences may lead their systems to yield different results.

Unconscious biases baked into AI will continue to produce some biased information. But the trick to combating this isn't some sort of national anti–woke AI policy but teaching AI literacy—ensuring that everyone knows that the version of reality reflected by AI tools may be every bit as biased as narratives produced by humans.

And, as the market for AI tools continues to grow, consumers can also assert preferences as they always do in marketplaces: by choosing the products that they think are best. For some, this might mean that more biased systems are actually more appealing; for others, systems that produce the most neutral results will be the most useful.

Whatever problems might result from private choices here, I'm way more worried about the deleterious effects of politicians trying to combat "woke" or "discriminatory" AI.

The forthcoming Trump executive order "would dictate that AI companies getting federal contracts be politically neutral and unbiased in their AI models, an effort to combat what administration officials see as liberal bias in some models," according to The Wall Street Journal.

That might seem unobjectionable at first blush. But while we might wish AI models were "neutral and unbiased"—just as we might wish the same about TV news programs, or magazine articles, or social media moderation—the fact is that private companies, be they television networks or publishers or tech companies, have a right to make their products as biased as they want. It's up to consumers to decide if they prefer neutral models or not.

Granted, the upcoming order is not expected to try to mandate such a requirement across the board but to stipulate it only for AI companies getting federal contracts. That seems fair enough in theory. In practice, it's not likely to be.

Look at the recent letters sent to tech companies by the attorney general of Missouri, who argues that AI tools are biased because they don't rank Trump as the best president on antisemitism issues.

Look at the past decade of battles over social media moderation, during which the left and the right have both cried "bias" over decisions that don't favor their preferred views.

Look at the way every recent presidential administration has tried to tie education funding to supporting or rejecting certain ideologies surrounding sex, gender, race, etc.

"Because nearly all major tech companies are vying to have their AI tools used by the federal government, the order could have far-reaching impacts and force developers to be extremely careful about how their models are developed," the Journal suggests.

To put it more bluntly, tech companies could find themselves having to retool AI models to fit the sensibilities and biases of Trump—or whoever is in power—in order to get lucrative contracts.

Sure, principled companies could opt out of trying for government contracts. But that just means that whoever builds the most sycophantic AI chatbots will be the ones powering the federal government. And those contracts could also mean that the most politically biased AI tools wind up being the most profitable and the most able to proliferate and expand.

AI Antidiscrimination Also Poses a Threat

Like "woke AI," the specter of "AI discrimination" has become a rallying cry for authorities looking to control AI outputs.

And, again, we've got something that doesn't sound so bad in theory. Who would want AI to discriminate?

But, in practice, new laws intended to prevent discrimination in artificial intelligence outputs could have negative results, as Greg Lukianoff, president and CEO of the Foundation for Individual Rights and Expression (FIRE), explains:

These laws — already passed in states like Texas and Colorado — require AI developers to make sure their models don't produce "discriminatory" outputs. And of course, superficially, this sounds like a noble endeavor. After all, who wants discrimination? The problem, however, is that while invidious discriminatory action in, say, loan approval should be condemned, discriminatory knowledge is an idea that is rightfully foreign. In fact, it should freak us out.

[…] Rather than calling for the arrest of a Klansman who engages in hateful crimes, these regulations say you need to burn down the library where he supposedly learned his hateful ideas. Not even just the books he read, mind you, but the library itself, which is full of other knowledge that would now be restricted for everyone else.

Perhaps more destructive than any government actions stemming from such laws is the way that they will influence how AI models are trained.

The very act of trying to depoliticize or neutralize AI, when done by politicians, could undermine AI's potential for neutral and nonpolitical knowledge dissemination. People are not going to trust tools that they know are being intimately shaped by particular political administrations. And they're not going to trust tools that seem like they've been trained to disregard reality when it isn't pretty, doesn't flatter people in power, or doesn't align with certain social goals.

"This is a matter of serious epistemic consequence," Lukianoff writes. "If knowledge is riddled with distortions or omissions as a result of political considerations, nobody will trust it."

"In theory, these laws prohibit AI systems from engaging in algorithmic discrimination—or from treating different groups unfairly," note Lukianoff and Adam Goldstein at National Review. "In effect, however, AI developers and deployers will now have to anticipate every conceivable disparate impact their systems might generate and scrub their tools accordingly. That could mean that developers will have to train their models to avoid uncomfortable truths and to ensure that their every answer sounds like it was created with HR and legal counsel looking over their shoulder, softening and obfuscating outputs to avoid anything potentially hurtful or actionable. In short, we will be (expensively) teaching machines to lie to us when the truth might be upsetting."

Whether it's trying to make ChatGPT a safe space for social justice goals or for Trump's ego, the fight to let government authorities define AI outputs could have disastrous results.

At the very least, it's going to lead to many years of fighting over AI bias, in the same way that we spent the past decade arguing over alleged biases in social media moderation and "censorship" by social media companies. And after all that, does anyone think that social media, on average, is any better—or producing any better outcomes—in 2025 than it was a decade ago?

Politicizing online content moderation has arguably made things worse and, if nothing else, been so very tedious. It looks like we can expect a rehash of all these arguments where we just replace "social media" and "search engines" with "AI."


Abortion Law Violates First Amendment

Tennessee cannot ban people from "recruiting" abortion patients who are minors, a federal court says. The ruling, from the U.S. District Court for the Middle District of Tennessee, comes as part of a challenge to the state's "abortion trafficking" law.

The recruitment bit "prohibits speech encouraging lawful abortion while allowing speech discouraging lawful abortion," per the court's opinion. "That is impermissible viewpoint discrimination, which the First Amendment rarely tolerates—and does not tolerate here."

While "Tennessee may criminalize speech recruiting a minor to procure an abortion in Tennessee," wrote U.S. District Judge Julia Gibbons. "The state may not, however, criminalize speech recruiting a minor to procure a legal abortion in another Tennessee."


Mississippi Can Make Social Media Companies Check IDs

Mississippi can start enforcing a requirement that social media platforms verify user ages and block anyone under age 18 who doesn't have parental consent to participate, the U.S. Court of Appeals for the 5th Circuit has ruled.

The tech industry trade group NetChoice has filed an emergency appeal to the U.S. Supreme Court to reinstate the preliminary injunction against enforcing the law.

"Just as the government can't force you to provide identification to read a newspaper, the same holds true when that news is available online," said Paul Taske, co-director of the NetChoice Litigation Center, in a statement. "Courts across the country agree with us: NetChoice has successfully blocked similar, unconstitutional laws in other states. We are confident the Supreme Court will agree, and we look forward to fighting to keep the internet safe and free from government censorship."


More Sex & Tech News

• Are men too "anxious about desire"? Is "heteropessimism" a useful concept? Should dating disappointment be seen as something deeper, or is that just a defense? In a much-talked-about new New York Times essay, Jean Garnett showcases the seductive appeal of blaming one's personal romantic woes on larger, more political forces, as well as the perils of this approach.

• "House Speaker Mike Johnson is rebuffing pressure to act on the investigation into Jeffrey Epstein, instead sending members home early for a month-long break from Washington after the week's legislative agenda was upended by Republican members who are clamoring for a vote," the Associated Press reports.

• X can tell users when the government wants their data. "X Corp. convinced the DC Circuit to vacate a broad order forbidding disclosure about law enforcement's subpoenas for social media account information, with the panel finding Friday a lower court failed to sufficiently review the potential harm of immediate disclosure," notes Bloomberg Law.

• A new, bipartisan bill would allow people to sue tech companies over training artificial intelligence systems using their copyrighted works without express consent.

• A case involving Meta and menstrual app data got underway this week. In a class-action suit, the plaintiffs say that period tracking app Flo shared their data with Meta and other companies without their permission. "Brenda R. Sharton of Dechert, the lead attorney representing Flo Health, said evidence will show that Flo never shared plaintiffs' health information, that plaintiffs agreed that Flo could share data to maintain and improve the app's performance, and that Flo never sold data or allowed anybody to use health information for ads," per Courthouse News Service.

• YouTube is now Netflix's biggest competitor. "The rivalry signals how the streaming wars have entered a new phase," television reporter John Koblin suggests. "Their strategies for success are very different, but, in ways large and small, it's becoming clear that they are now competing head-on."

• European Union publishers are suing Google over an alleged antitrust law violation. Their beef is with Google's AI overviews, which they're worried could cause "irreparable harm." They complain to the European Commission that "publishers using Google Search do not have the option to opt out from their material being ingested for Google's AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google's general search results page."


Today's Image

Conservative Political Action Conference (CPAC) | National Harbor, Maryland | 2014 (ENB/Reason)



Elizabeth Nolan Brown is a senior editor at Reason.
