Reason.com - Free Minds and Free Markets
Artificial Intelligence

How a Bill Banning AI Companions for Kids Could Usher in Widespread ID Checks Online

Plus: Supreme Court pauses ban on mail-order abortion pills, TikTok's artistic merit, a defense of pickup artists, and more...

Elizabeth Nolan Brown | 5.4.2026 11:51 AM

(Illustration: Lex Villena; Gage Skidmore)

Sen. Josh Hawley's Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act advanced out of the Senate Judiciary Committee last week. "A Trojan horse for universal online ID checks" is how Jibran Ludwig of Fight for the Future described it.

The bill would require anyone using an AI chatbot to provide proof of identity and ban minors from interacting with many sorts of AI chatbots entirely.

Unlike some social media age verification bills, it would give parents no right to opt out of the rules the federal government sets on their kids' technology use.

You are reading Sex & Tech, from Elizabeth Nolan Brown. Get more of Elizabeth's sex, tech, bodily autonomy, law, and online culture coverage.


The GUARD Act is co-sponsored by Sen. Richard Blumenthal (D–Conn.), who—like Hawley—has long been a champ at moral panic around technology. (Cue: Bipartisan is just another word for really bad idea…)

And while some on the Senate Judiciary Committee expressed concerns about privacy or how this could actually backfire and harm minors, those senators still voted to advance the bill. It "easily passed in committee," notes The Hill, despite some senators' reservations:

Sen. Alex Padilla (D-Calif.), who voted yes, said there are concerns about "potential privacy and security risks" with the age-verification component, suggesting it may need to be "fine-tuned."

Sen. Ted Cruz (R-Texas), who supported various kids online safety bills, said he would vote yes but noted the bill needs "some revisions."

Cruz was concerned the bill would completely ban all AI chatbots for minors, noting their potential benefits. Hawley clarified the bill does not ban all AI chatbots for minors, but rather it "prevents AI chatbots that engage with minors from pushing sexually explicit material to the minor," or encouraging self-harm or suicide.

That seems like some incredibly disingenuous framing from Hawley. While the bill does ban what he says it does, it would also do a whole lot more. Such as:

1) Ban Kids From Using Friendly AI 

The GUARD Act defines AI companion as any AI system that "provides adaptive, human-like responses to user inputs; and is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication." AI companies would be required to prohibit anyone under age 18 "from accessing or using any AI companion," the GUARD Act says.

That obviously goes way beyond stopping teens from chatting with robots about sex or violence. It takes away their right to talk to AI companions about any topic. And it defines AI companion so broadly that it would encompass any AI chatbot that affects a friendly or familiar tone.

Even at its least broad, such a ban would be a bad idea. Some teens might benefit from AI therapy tools. And there are all sorts of not-bad reasons why a teenager might want to engage with an AI chatbot capable of providing supportive or friendly communication.

In the broad form in which it's written, the bill could stop young people from engaging chatbots in all sorts of neutral or even positive interactions, including using "online tutors, practicing a foreign language, or developing an array of skills," as Jennifer Huddleston and Juan Londoño point out at the Cato Institute's blog:

AI tools have also become ubiquitous in many products, doing everything from providing tech support to helping customize a burrito (and perhaps being able to write code in the process). A February 2026 survey by the Pew Research Center found that over half of US teens use chatbots for help with schoolwork. The GUARD Act would prevent those under 18 from accessing any of these products.

2) Take Away Parental Rights

Parents would have no ability to opt their kids out of this ban. The GUARD Act takes the choice about when and how to introduce young people to certain AI technologies out of families' hands and into the hands of the state.

"Restricting parental choice in this manner is indicative of a failure to consider both the unique values of every family and the potential for AI chatbots to improve the lives of many young people, including those with disabilities like autism," note Huddleston and Londoño. "Different families may have different views on when a child should or shouldn't access any technology. This decision appropriately belongs with parents, not policymakers."

3) Invade Everyone's Privacy

Besides its negative implications for minors, the GUARD Act would be a big blow to privacy, since implementing it would require some sort of identity verification from all AI chatbot users.

The GUARD Act says that any provider of an AI chatbot must require all users to create an account, and that creating an account requires age verification.

"By mandating government ID or equivalent age verification for any American who wishes to interact with an AI chatbot, the bill burdens the speech and associational rights of every adult, not just minors," Ashkhen Kazaryan of The Future for Free Speech told The Hill.

Because AI tools and chatbots are becoming ubiquitous across all types of digital platforms, the GUARD Act's age verification scheme could wind up much broader than it might initially appear.

It could sweep in "every social media platform and the website of any company operating AI customer service chatbots," the digital rights group Fight for the Future points out:

But it doesn't end there: any person who "makes available an artificial intelligence chatbot" is covered by the law. This would require everyone from internet service providers to anyone who runs a blog with a comment section to administer online ID checks. While apparently narrow, this bill is in fact an online ID check mandate unmatched in scope and highly invasive in methods.

For better and for worse, AI chatbots are threatening to overtake search engines as the primary way people find information online. This means that the millions of people who use these tools for everyday tasks will now be providing sensitive and private information to a sketchy, insecure age verification service, which have already resulted in thousands of people's private information being leaked. Government censorship is not confined to outright prohibition of speech: burdens like this are a legally dubious limit on free expression.

4) Chill (and Compel) Speech

The way this measure is written, it could seriously restrict what AI chatbots are allowed to say while simultaneously compelling them to speak the government's messages.

At the start of every conversation and at 30-minute intervals thereafter, AI chatbots would have to tell users that they are not human. At the start of every conversation, they would also have to "clearly and conspicuously disclose to the user that the chatbot does not provide medical, legal, financial, or psychological services; and users of the chatbot should consult a licensed professional for such advice."

Meanwhile, chatbots would be banned from "represent[ing], directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional."

The "indirectly" bit there raises alarms. Authorities might argue simply providing authoritative advice counts as an indirect representation of professional authority.

Then there's the ban on AI chatbots engaging minors in discussions about sexuality. It's written broadly—this isn't just about stopping minors from viewing pornography. The GUARD Act would make it "unlawful to design, develop, or make available an artificial intelligence chatbot, knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct."

This could ban AI chatbots—even those that are strictly non-companion-like—from talking with minors about safe sex practices, contraception, sexual orientation, and more, since doing so might "pose a risk" of getting the minors to talk about intercourse, oral sex, masturbation, or anything related.

It could also chill AI and user speech around topics related to sex and sexuality generally, not just when minors are concerned.

Notice that an AI chatbot provider need not intentionally or affirmatively engage a minor in sex talk; it must only act in "disregard" of the "risk" that it could do so. The penalty is $100,000 per offense. For tech companies looking to avoid liability, that makes a strong case for limiting such discussions more generally, training their bots to shut down any conversations related to sex.

"Like age-verification proposals for online services, this bill is unlikely to survive constitutional scrutiny," write Huddleston and Londoño. "But beyond its likely unconstitutionality, Sen. Hawley's approach endangers users' privacy, limits parental rights, and locks minors out of beneficial uses of AI."


IN THE NEWS

Supreme Court pauses ban on mail-order abortion pills. On Friday, the 5th U.S. Circuit Court of Appeals said it's illegal to mail abortion pills. Today, the Supreme Court put a one-week pause on that decision.

In a unanimous ruling last week, 5th Circuit judges said the abortion-inducing drug mifepristone can be handed out only in person, contrary to the U.S. Food and Drug Administration's current prescribing rules. "Friday's ruling…affects all states, even those without abortion restrictions," noted PBS. "There is little precedent for a federal court overruling the scientific regulations of the FDA, and it remains to be seen how the decision could impact how the drug is dispensed long-term."

On Saturday, two mifepristone manufacturers asked the Supreme Court to stay the 5th Circuit's ruling. From SCOTUSblog:

The companies, Danco Laboratories and GenBioPro, both told the justices that the 5th Circuit's order was "unprecedented." Danco argued that the order "injects immediate confusion and upheaval into highly time-sensitive medical decisions," while GenBioPro said that the order "has unleashed regulatory chaos."…

Today, the Supreme Court did indeed pause the 5th Circuit's ruling. So mail-order mifepristone is still legal, for now.

"The order signed by Justice Samuel Alito temporarily allows women seeking abortions to obtain the pill at pharmacies or through the mail, without an in-person visit to a doctor," reports the Associated Press. "Alito's order will remain in effect for another week while both sides respond and the court more fully considers the issue."


ON SUBSTACK

Gen Z marriage myths: Halina Bennet at Slow Boring pushes back against the idea that marriage is dead among the college-educated ranks of Gen Z:

The data tells us that college-educated women are still marrying—they are just marrying later. It's actually the numbers among non-college-educated women that are falling. Many of the people who will eventually marry simply haven't reached the average age of marriage among college-educated women.

Bennet conducted her own survey of people ages 18 to 35, and "the portrait that emerged was of a generation that has decided that marriage is worth having, and that value makes it worth waiting for."

But Bennet does see a tendency to place too much stock in doubt. "This is a generation trained to wait for complete information—for the best option, the right moment, the optimal conditions," she writes. "Marriage resists this habit because the data is never complete and the conditions never fully stabilize."


READ THIS THREAD

Point:

Frankly, they're not just replacing reading with vertical video slop. They're replacing EVERYTHING with vertical video. Streaming, gaming, IRL interactions, all of it is being swallowed by vertical shortform video. https://t.co/synEcWAw4q pic.twitter.com/XrUQXQNcDh

— Jeremiah Johnson 🌐 (@JeremiahDJohns) April 27, 2026

And counterpoint?

"There are about 70% more bookstores now than there were six years ago in the United States. After 20 years of declining numbers, they're coming roaring back."

"Since 2020,… American Booksellers Association membership has grown from 1,900 to 3,200."https://t.co/XTUijwVQQn

— 𝙲𝚑𝚊𝚛𝚕𝚎𝚜 𝙲. 𝙼𝚊𝚗𝚗 (@CharlesCMann) April 30, 2026

Relatedly: "Is TikTok art?" asks The Argument.

"Kids don't just spend time on social media because they are screen junkies who can't read. That would be too easy," Maibritt Henkel writes.

They spend time on social media, in large part, because social media has become brilliantly, absurdly, unprecedentedly, entertaining.

Even if you wish it weren't, vertical 30-second video is the creative medium of our time. Taking seriously the merits of any new formal paradigm is in the spirit of how we have met every technological rupture in art history.


More Sex & Tech

• Magdalene J. Taylor defends the pickup artist. "While some might argue that pickup artistry faded due to its misogynistic attitudes, the reality is that the manosphere it grew from is more misogynistic than ever," she writes. "Trying to get laid has been traded for a culture of hustle and grift that views women as a waste of time. At the very least, the pickup artist didn't think so."

• "The Trump administration's intrusive social media rules are a gift to tyrants," writes Julian Sanchez. Pointing to the U.S. Embassy in Thailand posting that all visa applicants must set their social media profiles to public, he notes that "demanding prospective visitors and immigrants set their social media profiles public isn't just an intrusive policy in service of a constitutionally dubious scheme to exclude people with disfavored political opinions: It is likely to put applicants, their friends, and their families in very real, physical danger" in countries like Thailand, where it's still criminal to insult the royal family.

• Oklahoma lawmakers passed a bill to criminalize providing a woman with abortion pills. "Supporters argued it was necessary to save the unborn and reduce abortions forced on women by sex traffickers," reports the Oklahoma Voice. (That last bit is a stellar example of how supporters of abortion bans try to negate women's agency in the abortion debate so that their bans can be framed as for women's own good.)



Elizabeth Nolan Brown is a senior editor at Reason.

COMMENTS

  1. Rick James   5 hours ago

    This means that the millions of people who use these tools for everyday tasks will now be providing sensitive and private information to a sketchy, insecure age verification service, which have already resulted in thousands of people's private information being leaked.

    There isn't enough comment space to explain to you smooth-brained "section 230 is the first amendment of the internet" types as to how retarded this take is-- not from a "I think there SHOULD be age verification" standpoint, but from a "you have no idea how big a security risk AI is for people" take.

    THE biggest fear in IT security right now is AI. And it's largely because the English Major sitting in HR who says, "Hey, Claude, summarize all my documents!" doesn't realize that all her documents just went to the cloud (someone else's computer), which now has a permanent copy of them, used them to train their AI bot yet further, and will probably in some way or another be searchable by other people.

    1. Rick James   5 hours ago

      Unfortunately, I can't find the video, but a couple of years ago, a guy who ran a VPN company pointed out how end-to-end encryption was likely going to be useless-- not because AI would break your encryption, but because Apple and Microsoft were literally bragging to a packed house that their phone (aka 'the cloud' aka 'someone else's computer' aka "apple and microsoft") would "See what you see, hear what you hear, read what you read". You don't need to crack encryption when everything at the decrypted end point is just uploaded to Apple's AI and summarized for you.

      1. Rick James   5 hours ago

        Found it.

    2. AJinNJ   5 hours ago

      "someone else's computer who now has a permanent copy of them and used them to train their AI bot yet further and will probably in some way or another be searchable by other people."

      Which is no different from Facebook, or Twitter, or TikTok, or E-Mail, or SMS Texts, or fill in the blank with any online service.

      https://youtu.be/iakgpq4p9yM

      1. Rick James   5 hours ago

        It is different, because at least when you uploaded something to facebook, you could at least stop and think, "I'm sending this to facebook for other people to look at". But when you're sitting at your computer with a bunch of locally stored excel spreadsheets and you say, "Hey Co-Pilot, can you produce a summary of all these" most normie users don't think that all your documents just went to the internet, were searched, analyzed and used for training data. Most people think the AI is just running on their computer and is a magical local agent that does whatever it does... locally.

        I have little sympathy for people who knowingly upload their stuff to the internet and are then shocked that someone looked at it. This is different because everything all the time will be sent to the AI cloud engine to continuously review everything you do. There's no psychological demarcation point.

        1. AJinNJ   5 hours ago

          Well, most people don't understand that Google and nearly all email providers scan their e-mails constantly and form a profile on them that they then sell to data brokers. There's no psychological demarcation point there either.... SMS texts, same thing. Most people assume that the only other person seeing what they're sending is the individual it is addressed to.

          1. Rick James   4 hours ago

            Well, most people don't understand that Google and nearly all email providers scan their e-mails constantly and form a profile on them that they then sell to data brokers. There's no psychological demarcation point there either.... SMS texts, same thing.

            But you're still giving examples which are all relegated to communications between entities. Email (goes out to the internet), SMS messages (goes out the cell network). I'm talking about a locally installed set of Word documents on your C: drive. Most normies think of that as purely locally stored data.

            I'm also referring to encrypting your communications between endpoints, where even most normies realize that once it exits the encrypted tunnel and is (again) stored locally, what they're reading or viewing or listening to on their screen is no longer encrypted but is 'safe' because it is now stored... locally. But now with modern continuous, always-on AI systems, the second the data is unencrypted and viewed by the user, it's immediately sent BACK out to parts unknown for 'summarization/identification'.

            1. AJinNJ   4 hours ago

              If people made a minor mental shift and treated AI as a person, my references to E-Mail or SMS suffice. You'd go from thinking 'locally' to thinking you're sharing this with another "person". And if this really requires legislation, then just ban AI bots built directly into OSes... require the domain separation where stuff has to be manually selected for AI analysis. Ultimately, this seems like a company policy and training issue, not something that warrants government intervention.

              This is why I'm also completely against remote-based OS log-ins like MS did with Windows 11 (I managed to bypass that myself for my single instance of a Windows install). Not to mention this OS-based age verification thing is just a step closer to paying for the OS monthly as a service, since they're supposedly (conveniently) willing to offer credit card-based age verification: "You already gave us your credit card, that'll be $10 a month to access your system."

              Regardless, this is why the education system needs a major revamp. Online safety and privacy should be taught in schools; including AI awareness. (I'm also an advocate for a complete overhaul of the structure; get rid of K-12 and replace with PreK-10.)

              Note: I'm a Linux user, I've had a fundamental hatred of Microsoft for nearly a decade. I do use an iPhone but have had Siri disabled since day one.

              1. Social Justice is neither   2 hours ago

                It's not like sharing with another person, because then you could enforce some privacy and encryption; this is having another person over your shoulder 24/7, live streaming everything you do or see to somewhere and someone else.

          2. Rick James   4 hours ago

            Let me give you a concrete example of what I'm talking about.

            I work for a billion dollar manufacturing company with offices all over the world. We have no-shit trade secrets and intellectual property. Corporate espionage, especially in China, is a very real thing. Blanche in Finance and Don, head of engineering-- even though they are not tech people-- know very well that you don't send engineering documents through email and you don't send non-public financial information over the internet. She takes comfort in knowing that intranet communications are encrypted for documents going to corporate network shares and the like (my job). Site-to-site VPNs are all encrypted, all SD-WAN tunnels are secure and encrypted, all network shares are locked down to least privilege-- i.e., Blanche knows that Frank from materials or Leslie from shipping and receiving can't get into her quarterly financial reports by browsing around network shares. Blanche is just smart enough that she knows not to send any of her files or folders to public cloud sites such as DropBox or Google Drive... and as corporate policy we block as many of those sites as we can, and yes, it's an ongoing game of whack-a-mole, but that's why you pay a networking guy 6 figures to stay on top of it.

            One day, Blanche gets wind of this new thing called "Co-pilot" installed right in microsoft and it will summarize all her documents. So she decides to start playing with that not realizing that all of those documents that she was under the impression were absolutely stored local to the corporate network/infrastructure have just now been sent to the cloud, quietly and seamlessly.

            Suddenly, very... VERY secret financial documents not meant for public consumption just got sent somewhere into Microsoft's Azure cloud. Microsoft may pinkie promise that they won't leak them, or they won't look at them, or they won't save them in persistent storage, but it doesn't matter, because Blanche didn't even know they went there.

            1. mad.casual   3 hours ago

              Even if you don't upload documents; 'anonymize'.

              User A: Hey Claude, I need to perform this repetitive alpine linux update across all of our nodes to reduce server bloat and maintain uptime. Give me an ansible or other CI/CD framework to make it happen.
              Claude: [provides solution]
              User A: That worked great. I just need this tweak...
              [Meanwhile... on the opposite side of the planet]
              User X: Hey Claude, I need to reduce server bloat and maintain uptime
              Claude: Try this solution [provides "same" solution]. Let me know if you need any tweaks.

              Even if Co-pilot: I've got to organize tasks and emails for everyone on my team. Help me delegate tasks to all of them and get them oriented to Alice in Accounting and Larry in Legal for any questions or stumbling blocks that may come up.

              1. mad.casual   2 hours ago

                I'm probably a little more "No bucks, no Buck Rogers" than your average infosec pogue, and even I'm flabbergasted at the way companies threw everything out the window and just shrugged and said, "OK, everybody in the company is using AI for everything from mundane emails to production infrastructure from now on."

                Very reminiscent of how we all forgot about 100 yrs. of virology and epidemiology for two weeks.

  2. Agammamon   4 hours ago

    >"How a Bill Banning AI Companions for Kids Could Usher in Widespread ID Checks Online

    Gotta protect your right to goon!

  3. damikesc   3 hours ago

    My level of "give a shit" about online companies being inconvenienced is sub-zero.

  4. SCOTUS gave JeffSarc a big sad   3 hours ago

    ENB is concerned there could be more barriers to grooming.

  5. Neutral not Neutered   2 hours ago

    Plenty of things require ID before you can do them...

    "By mandating government ID or equivalent age verification for any American who wishes to interact with an AI chatbot, the bill burdens the speech and associational rights of every adult, not just minors,"



