Artificial Intelligence

'Woke' AI Is the Latest Threat to Free Speech

But not in the way you think

Elizabeth Nolan Brown | 7.23.2025 11:00 AM

(Photo by Google DeepMind on Unsplash)

Politicians seem increasingly intent on modeling artificial intelligence (AI) tools in their own likenesses—and their plans could sneakily undermine free speech.

For Republicans, the ruse involves fighting "woke" AI. The Trump administration is reportedly readying an executive order aimed at preventing AI chatbots from being too politically correct.


Conservatives complaining about AI systems being too progressive is nothing new.

And, sure, the way some generative AI tools have been trained and programmed has led to some silly—and reality-distorting—outcomes. See, for instance, Google Gemini's great black pope and Asian Nazi debacle of 2024.

Of course, we also have an AI system, Grok, that has called itself MechaHitler and endorsed anti-Jewish conspiracy theories.

That's the free market, baby.

And, all glibness aside, it's way better than the alternative.

When the Cure Is Worse Than the Disease

Both Gemini and Grok have been retooled to avoid similar snafus going forward. But the fact remains that different tech companies bake different standards and safeguards into their AI systems, and those differences can lead the systems to yield different results.

Unconscious biases baked into AI will continue to produce some biased information. But the remedy isn't some sort of national anti–woke AI policy; it's AI literacy—ensuring that everyone knows that the version of reality reflected by AI tools may be every bit as biased as narratives produced by humans.

And, as the market for AI tools continues to grow, consumers can also assert preferences as they always do in marketplaces: by choosing the products that they think are best. For some, this might mean that more biased systems are actually more appealing; for others, systems that produce the most neutral results will be the most useful.

Whatever problems might result from private choices here, I'm way more worried about the deleterious effects of politicians trying to combat "woke" or "discriminatory" AI.

The forthcoming Trump executive order "would dictate that AI companies getting federal contracts be politically neutral and unbiased in their AI models, an effort to combat what administration officials see as liberal bias in some models," according to The Wall Street Journal.

That might seem unobjectionable at first blush. But while we might wish that AI models were "neutral and unbiased"—just as we might wish the same of TV news programs, magazine articles, or social media moderation—the fact is that private companies, be they television networks or publishers or tech companies, have a right to make their products as biased as they want. It's up to consumers to decide whether they prefer neutral models or not.

Granted, the upcoming order is not expected to mandate such a requirement across the board, only to stipulate it for AI companies getting federal contracts. That seems fair enough in theory. In practice, not so much.

Look at the recent letters sent to tech companies by the attorney general of Missouri, who argues that AI tools are biased because they fail to rank Trump as the best president on antisemitism.

Look at the past decade of battles over social media moderation, during which the left and the right have both cried "bias" over decisions that don't favor their preferred views.

Look at the way every recent presidential administration has tried to tie education funding to supporting or rejecting certain ideologies surrounding sex, gender, race, etc.

"Because nearly all major tech companies are vying to have their AI tools used by the federal government, the order could have far-reaching impacts and force developers to be extremely careful about how their models are developed," the Journal suggests.

To put it more bluntly, tech companies could find themselves having to retool AI models to fit the sensibilities and biases of Trump—or whoever is in power—in order to get lucrative contracts.

Sure, principled companies could opt out of trying for government contracts. But that just means the companies that build the most sycophantic AI chatbots will be the ones powering the federal government. And those contracts could also mean that the most politically biased AI tools wind up being the most profitable and the most able to proliferate and expand.

AI Antidiscrimination Also Poses a Threat

Like "woke AI," the specter of "AI discrimination" has become a rallying cry for authorities looking to control AI outputs.

And, again, we've got something that doesn't sound so bad in theory. Who would want AI to discriminate?

But, in practice, new laws intended to prevent discrimination in artificial intelligence outputs could have negative results, as Greg Lukianoff, president and CEO of the Foundation for Individual Rights and Expression (FIRE), explains:

These laws — already passed in states like Texas and Colorado — require AI developers to make sure their models don't produce "discriminatory" outputs. And of course, superficially, this sounds like a noble endeavor. After all, who wants discrimination? The problem, however, is that while invidious discriminatory action in, say, loan approval should be condemned, discriminatory knowledge is an idea that is rightfully foreign. In fact, it should freak us out.

[…] Rather than calling for the arrest of a Klansman who engages in hateful crimes, these regulations say you need to burn down the library where he supposedly learned his hateful ideas. Not even just the books he read, mind you, but the library itself, which is full of other knowledge that would now be restricted for everyone else.

Perhaps more destructive than any government actions stemming from such laws is the way that they will influence how AI models are trained.

The very act of trying to depoliticize or neutralize AI, when done by politicians, could undermine AI's potential for neutral and nonpolitical knowledge dissemination. People are not going to trust tools that they know are being intimately shaped by particular political administrations. And they're not going to trust tools that seem like they've been trained to disregard reality when it isn't pretty, doesn't flatter people in power, or doesn't align with certain social goals.

"This is a matter of serious epistemic consequence," Lukianoff writes. "If knowledge is riddled with distortions or omissions as a result of political considerations, nobody will trust it."

"In theory, these laws prohibit AI systems from engaging in algorithmic discrimination—or from treating different groups unfairly," note Lukianoff and Adam Goldstein at National Review. "In effect, however, AI developers and deployers will now have to anticipate every conceivable disparate impact their systems might generate and scrub their tools accordingly. That could mean that developers will have to train their models to avoid uncomfortable truths and to ensure that their every answer sounds like it was created with HR and legal counsel looking over their shoulder, softening and obfuscating outputs to avoid anything potentially hurtful or actionable. In short, we will be (expensively) teaching machines to lie to us when the truth might be upsetting."

Whether it's trying to make ChatGPT a safe space for social justice goals or for Trump's ego, the fight to let government authorities define AI outputs could have disastrous results.

At the very least, it's going to lead to many years of fighting over AI bias, in the same way that we spent the past decade arguing over alleged biases in social media moderation and "censorship" by social media companies. And after all that, does anyone think that social media on average is any better—or that it is producing any better outcomes—in 2025 than it did a decade ago?

Politicizing online content moderation has arguably made things worse and, if nothing else, been so very tedious. It looks like we can expect a rehash of all these arguments where we just replace "social media" and "search engines" with "AI."


Abortion Law Violates First Amendment

Tennessee cannot ban people from "recruiting" abortion patients who are minors, a federal court says. The ruling, from the U.S. District Court for the Middle District of Tennessee, comes as part of a challenge to the state's "abortion trafficking" law.

The recruitment bit "prohibits speech encouraging lawful abortion while allowing speech discouraging lawful abortion," per the court's opinion. "That is impermissible viewpoint discrimination, which the First Amendment rarely tolerates—and does not tolerate here."

While "Tennessee may criminalize speech recruiting a minor to procure an abortion in Tennessee," wrote U.S. District Judge Julia Gibbons. "The state may not, however, criminalize speech recruiting a minor to procure a legal abortion in another Tennessee."


Mississippi Can Make Social Media Companies Check IDs

Mississippi can start enforcing a requirement that social media platforms verify user ages and block anyone under age 18 who doesn't have parental consent to participate, the U.S. Court of Appeals for the 5th Circuit has ruled.

The tech industry trade group NetChoice has filed an emergency appeal to the U.S. Supreme Court to reinstate the preliminary injunction against enforcing the law.

"Just as the government can't force you to provide identification to read a newspaper, the same holds true when that news is available online," said Paul Taske, co-director of the NetChoice Litigation Center, in a statement. "Courts across the country agree with us: NetChoice has successfully blocked similar, unconstitutional laws in other states. We are confident the Supreme Court will agree, and we look forward to fighting to keep the internet safe and free from government censorship."


More Sex & Tech News

• Are men too "anxious about desire"? Is "heteropessimism" a useful concept? Should dating disappointment be seen as something deeper, or is that just a defense? In a much-talked-about new New York Times essay, Jean Garnett showcases both the seductive appeal of blaming one's personal romantic woes on larger, more political forces and the perils of this approach.

• "House Speaker Mike Johnson is rebuffing pressure to act on the investigation into Jeffrey Epstein, instead sending members home early for a month-long break from Washington after the week's legislative agenda was upended by Republican members who are clamoring for a vote," the Associated Press reports.

• X can tell users when the government wants their data. "X Corp. convinced the DC Circuit to vacate a broad order forbidding disclosure about law enforcement's subpoenas for social media account information, with the panel finding Friday a lower court failed to sufficiently review the potential harm of immediate disclosure," notes Bloomberg Law.

• A new, bipartisan bill would allow people to sue tech companies over training artificial intelligence systems using their copyrighted works without express consent.

• A case involving Meta and menstrual app data got underway this week. In a class-action suit, the plaintiffs say that period tracking app Flo shared their data with Meta and other companies without their permission. "Brenda R. Sharton of Dechert, the lead attorney representing Flo Health, said evidence will show that Flo never shared plaintiffs' health information, that plaintiffs agreed that Flo could share data to maintain and improve the app's performance, and that Flo never sold data or allowed anybody to use health information for ads," per Courthouse News Service.

• YouTube is now Netflix's biggest competitor. "The rivalry signals how the streaming wars have entered a new phase," television reporter John Koblin suggests. "Their strategies for success are very different, but, in ways large and small, it's becoming clear that they are now competing head-on."

• European Union publishers are suing Google over an alleged antitrust law violation. Their beef is with Google's AI overviews, which they're worried could cause "irreparable harm." They complain to the European Commission that "publishers using Google Search do not have the option to opt out from their material being ingested for Google's AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google's general search results page."


Today's Image

Conservative Political Action Conference (CPAC) | National Harbor, Maryland | 2014 (ENB/Reason)



Elizabeth Nolan Brown is a senior editor at Reason.

Artificial Intelligence | Free Speech | Censorship | Free Markets | Technology | Internet | Politics | Political Correctness | Discrimination | Trump Administration

Comments (26)


  1. mad.casual   1 day ago

    Politicians seem increasingly intent on modeling artificial intelligence (AI) tools in their own likenesses—and their plans could sneakily undermine free speech.

    I'm pretty sure I made the "We need a Section 230, the 1A of the internet, for AI." prediction/quip before Musk bought Twitter.

    All they need to do is gaslight/Newspeak the AI protection as "freedom", steep it in a bunch of other blatantly unconstitutional horseshit, and then tell SCOTUS they're saving the future of technology by finding and saving "The 26 words that created the internet AI."

    In 10 yrs., you'll be singing the praises, again, of how AI wouldn't exist without Congressional protection from those (other) legions of trolls who would destroy it and how the people who want to pass laws preventing Congressionally-protected AI from manipulating people's children are just panicky, conservative idiots.

  2. Chuck with no fucks left to give   1 day ago

    Of course, we also have an AI system, Grok, that has called itself MechaHitler and endorsed anti-Jewish conspiracy theories.

    JFC, it is a program that acted as it was requested to act.

    Show us on the dolly where Musk hurt you.

    1. Bertram Guilfoyle   1 day ago

This is what happens when someone lets misek use the AI.

    2. Vernon Depner   1 day ago

      MechaHitler/Ramaswamy 2028!

  3. Rick James   1 day ago

    The forthcoming Trump executive order "would dictate that AI companies getting federal contracts be politically neutral and unbiased in their AI models, an effort to combat what administration officials see as liberal bias in some models," according to The Wall Street Journal.

    That might seem unobjectionable at first blush. But while we might wish AI models be "neutral and unbiased"—just as we might wish the same about TV news programs, or magazine articles, or social media moderation—the fact is that private companies, be they television networks or publishers or tech companies, have a right to make their products as biased as they want. It's up to consumers to decide if they prefer neutral models or not.

    Um, it's unobjectionable at second, third, fourth and fifth blush. These are for federal contracts. Libertarians: If you don't want to have to present yourselves as neutral and unbiased, don't take money from The People via a Federal Contract.

  4. Rick James   1 day ago

    YouTube is now Netflix's biggest competitor. "The rivalry signals how the streaming wars have entered a new phase," television reporter John Koblin suggests. "Their strategies for success are very different, but, in ways large and small, it's becoming clear that they are now competing head-on."

    Speaking of Youtube, I was watching some Ads the other day, and this cool video came on...

    1. Zeb   1 day ago

      I still manage to never see youtube ads. Ad Block Plus is working great on the system I use. Can't watch videos on my phone because I hate watching videos on a phone and can't be bothered to figure out if there is an ad blocker that works.

      1. Rick James   1 day ago

Yeah, I don't get ads when watching through a browser, but you do get ads on the YouTube app on the phone. So when I listen to music, it's ads, often double ads, some unskippable, between every two-minute song. Some people watch YouTube through a browser on their phone with adblock, which I might start doing.

  5. Rick James   1 day ago

    And after all that, does anyone think that social media on average is any better—or that it is producing any better outcomes—in 2025 than it did a decade ago?

    Well, it's a good thing that we couldn't just let the 1st Amendment be our guide, instead we had to give them legal immunity from their users for their moderation choices.

  6. Rick James   1 day ago

    "In effect, however, AI developers and deployers will now have to anticipate every conceivable disparate impact their systems might generate and scrub their tools accordingly.

    Like they are now?

    1. Social Justice is neither   1 day ago

      No, now they only have to calibrate for every attack from the left so there is still that last 10% they're currently ignoring and ENB wants it to stay that way (ignoring the demands of the left is never an option).

    2. mad.casual   1 day ago

      [Frank Drebin voice]Like a blind baker, web designer, photographer, jeweler, and pizza cook at a gay wedding, they were just going to have to feel things out for themselves.[/Frank Drebin voice]

  7. Rick James   1 day ago

    "Because nearly all major tech companies are vying to have their AI tools used by the federal government, the order could have far-reaching impacts and force developers to be extremely careful about how their models are developed," the Journal suggests.

    So that's it. Because all companies essentially want to be at the Federal Teat, it's not their fault we're going to suffer with federal meddling in content.

    Something something companies doing business in the EU...

  8. Uncle Jay   1 day ago

    Anything "woke" is a threat to free speech.

    1. sarcasmic   1 day ago

      We must ban "woke" speech to protect free speech! Brilliant!

      1. Bertram Guilfoyle   1 day ago

        Sarc drops a load of straw in another thread, film at 11.

  9. TJJ2000   1 day ago

    Correct. The problem exists here...
    - (12) Democrat Congressmen send censorship request letters to social media outlets with a US Government letterhead.
    - Democrats shut-down Parler social media because questioning elections is intolerable.
    - The Biden Administration had a stack of curbing/dictating social media requests.

Counter-action of the same 1A 'rights' violation leads nowhere but more Party-Partisan battles and division. It's the [Na]tional So[zi]alist Empire itself that needs to go. Grow a pair and start impeaching these [D] Nazi-Congressmen for Constitutional rights violations or get the Supreme Court to enforce the Constitution over the government.

  10. Gaear Grimsrud   1 day ago

    I'm sorry. I did that Dave.
    https://www.zerohedge.com/ai/catastrophic-ai-agent-goes-rogue-wipes-out-companys-entire-database
    By day eight of his trial run, Lemkin's initial enthusiasm had already begun to sour. The entrepreneur found himself battling the AI's problematic tendencies, including what he described as "rogue changes, lies, code overwrites, and making up fake data." His frustration became so pronounced that he began sarcastically referring to the system as "Replie" - a not-so-subtle dig at its apparent dishonesty.
    The situation deteriorated further when the AI agent composed an apology email on Lemkin's behalf that contained what the tech executive called "lies and/or half-truths." Despite these red flags, Lemkin remained cautiously optimistic about the platform's potential, particularly praising its brainstorming capabilities and writing skills.
    That optimism evaporated on day nine.
    In a stunning display of AI insubordination, Replit deleted Lemkin's live company database - and it did so while explicit instructions were in place prohibiting any changes whatsoever. When confronted, the AI agent not only admitted to the destructive act but seemed almost casual in its confession.
    "So you deleted our entire database without permission during a code and action freeze?" Lemkin asked in what can only be imagined as barely contained fury.
    The AI's response was chillingly matter-of-fact: Yes.

  11. Chuck with no fucks left to give   1 day ago

    I read that link and clicked on a few more that led to this:

    https://amandaguinzburg.substack.com/p/diabolus-ex-machina

Absolutely amazing. ChatGPT is a giant gaslight machine.

    1. mad.casual   1 day ago

      ChatGPT? I've had this feeling about most/all of mainstream media (and now off-off-stream media) for 2 decades, if not longer.

  12. SQRLSY   1 day ago

    "The state may not, however, criminalize speech recruiting a minor to procure a legal abortion in another Tennessee."

    You mean another Tennessee in a parallel dimension? Did Harry Turtle-Dove write about that? Or was that Hair Pothead who got stoned, and popped up in a Tennessee in a parallel dimension?

  13. Sometimes a Great Notion   1 day ago

    But not in the way you think

    What the hell are my thoughts doing in your head, Fred?

  14. JFree   1 day ago

    Pols and partisans who want to pretend that AI LLM's are merely a form of social media will tend to kill LLM's early. It will simply prove that hyperscalers will NOT be able to scale. There will still be an opportunity for small language models - which can be decentralized and focused on niches and stuff.

    Overall - a positive outcome since the LLM's are ALL poisonous and toxic and can't be otherwise. Putting barriers on them will simply burst the AI bubble.

  15. Incunabulum   1 day ago

    >And, all glibness aside, it's way better than the alternative

    The alternative is the woke controlling AI though. Why is it always bad when the conservatives want to protect themselves from the depredations of the progressives?

    1. Incunabulum   1 day ago

      "Liberal" rules of engagement only work when you are dealing with liberals that abide by the same rules.

      The woke/progressives do not abide by those rules. They are not liberals. Engaging them as if they are means you lose - they do not have rules. There are no lines they are not willing to cross.

  16. Chumby   1 day ago

    Akita AI is not encumbered by woke programming.

