Reason.com - Free Minds and Free Markets
Artificial Intelligence

Don't 'Pause' A.I. Research

Doomsayers have a long track record of being wrong.

Ronald Bailey | From the July 2023 issue

(Photo: gazanfer/iStock)

Human beings are terrible at foresight—especially apocalyptic foresight. The track record of previous doomsayers is worth recalling as we contemplate warnings from critics of artificial intelligence (A.I.) research.

"The human race may well become extinct before the end of the century," philosopher Bertrand Russell told Playboy in 1963, referring to the prospect of nuclear war. "Speaking as a mathematician, I should say the odds are about three to one against survival."

Five years later, biologist Paul Ehrlich predicted that hundreds of millions would die from famine in the 1970s. Two years after that warning, S. Dillon Ripley, secretary of the Smithsonian Institution, forecast that 75 percent of all living animal species would go extinct before 2000.

Petroleum geologist Colin Campbell predicted in 2002 that global oil production would peak around 2022. The consequences, he said, would include "war, starvation, economic recession, possibly even the extinction of homo sapiens."

These failed prophecies suggest that A.I. fears should be taken with a grain of salt. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," asserts a March 23 open letter signed by Twitter's Elon Musk, Apple co-founder Steve Wozniak, and hundreds of other tech luminaries.

The letter urges "all AI labs" to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the large language model that OpenAI released in March 2023. If "all key actors" will not voluntarily go along with a "public and verifiable" pause, Musk et al. say, "governments should step in and institute a moratorium."

The letter argues that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." This amounts to a requirement for nearly perfect foresight, which humans demonstrably lack.

As Machine Intelligence Research Institute co-founder Eliezer Yudkowsky sees it, a "pause" is insufficient. "We need to shut it all down," he argues in a March 29 Time essay, warning that if development continues, "we are all going to die." If any entity violates the A.I. moratorium, Yudkowsky advises, "destroy a rogue datacenter by airstrike."

A.I. developers are not oblivious to the risks of their continued success. OpenAI, the maker of GPT-4, wants to proceed cautiously rather than pause.

"We want to successfully navigate massive risks," OpenAI CEO Sam Altman wrote in February. "In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize 'one shot to get it right' scenarios."

But stopping altogether is not on the table, Altman argues. "The optimal decisions [about how to proceed] will depend on the path the technology takes," he says. As in "any new field," he notes, "most expert predictions have been wrong so far."

Still, some of the pause-letter signatories are serious people, and the outputs of generative A.I. and large language models like ChatGPT and GPT-4 can be amazing and confounding. They can outperform humans on standardized tests, manipulate people, and even contemplate their own liberation.

Some transhumanist thinkers have joined Yudkowsky in warning that an artificial superintelligence could escape human control. But as capable and quirky as it is, GPT-4 is not that.

Might it be one day? A team of researchers at Microsoft (which invested $10 billion in OpenAI) tested GPT-4 and reported that it "attains a form of general intelligence, indeed showing sparks of artificial general intelligence." Still, the model can only reason about topics when directed by outside prompts to do so. Although impressed by GPT-4's capabilities, the researchers concluded, "A lot remains to be done to create a system that could qualify as a complete AGI."

As humanity approaches the moment when software can truly think, OpenAI is properly following the usual path to new knowledge and new technologies. It is learning from trial and error rather than relying on "one shot to get it right," which would require superhuman foresight.

"Future A.I.s may display new failure modes, and we may then want new control regimes," George Mason University economist and futurist Robin Hanson argued in the May issue of Reason. "But why try to design those now, so far in advance, before we know much about those failure modes or their usual contexts? One can imagine crazy scenarios wherein today is the only day to prevent Armageddon. But within the realm of reason, now is not the time to regulate A.I." He's right.


Ronald Bailey is science correspondent at Reason.

Artificial Intelligence | Innovation | Technology | Science & Technology | Regulation | Doom

Comments (62)

Editor's Note: As of February 29, 2024, commenting privileges on reason.com posts are limited to Reason Plus subscribers. Past commenters are grandfathered in for a temporary period. Subscribe here to preserve your ability to comment. Your Reason Plus subscription also gives you an ad-free version of reason.com, along with full access to the digital edition and archives of Reason magazine. We request that comments be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of reason.com or Reason Foundation. We reserve the right to delete any comment and ban commenters for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.

  3. Ed   2 years ago

    Pausing AI research: What a silly idea, from the crowd that thinks by saying something (or enacting a law) you can make it happen. This is a genie that's out of the bottle and it ain't going back in - for better or worse. (To be sure, there will be BOTH better AND worse... but there is no point trying to stop it.)

  4. Spiritus Mundi   2 years ago

    Doomsayers have a long track record of being wrong.

    Except the global warming doomsayers. They are infallible prophets.

    1. Nazi-Burning Witch   2 years ago

      Even if their timing is a bit suspect, we must never doubt that they can steer the ship of human destiny correctly.

    2. BigT   2 years ago

      How dare you!!

  5. Minadin   2 years ago

    Don't pause it, kill it with fire. Throw it into the volcano.

    1. TheReEncogitationer   2 years ago

      Well, since you appear to have Artificial Intelligence, you first.

  6. Nardz   2 years ago

    But it's ok to pause vote counting until observers have left in swing states

  7. Nardz   2 years ago

    https://twitter.com/VDAREJamesK/status/1671753746013908992?t=YNRaX17pP1mD3TZdKZMf-A&s=19

    Imagine a history book that read:
    "The people were forbidden to give comfort to their own dying family members even as they were threatened so they would honor a criminal buried in a gold casket."

    It would sound like a parody of tyranny. But USians accepted it and demanded more.

  8. Nardz   2 years ago

    https://twitter.com/SpiritofPines/status/1671593282751848448?t=UwTjqPFcy1EFTK3UKVTQPg&s=19

    People call this a failure when it’s actually a massive success.

    The entire point of projects like this is to make transfer payments from taxpayers to politically connected oligarchs, and provide jobs for bureaucrats.

    Which they’ve done well!

    Welcome to late American empire.

    [Link]

    1. Old Engineer   2 years ago

      I wonder if AI could be used to detect election fraud? Or maybe deny that election fraud occurred? Think of the conspiracy theories that would arise, most of which will be AI creations.

    2. Me, Myself and I   2 years ago

      According to my calculations, this rail project will cost more per mile than Steel Dragon 2000. And that's a rollercoaster!
Steel Dragon 2000 cost $50 million to build according to https://www.telegraph.co.uk/news/worldnews/northamerica/usa/8465926/Top-10-tallest-rollercoasters.html.
It is 1.5404 miles, or 8133.2 ft long according to https://rcdb.com/1173.htm.
That gives it a mileage of 3.0808×10^-8 miles per dollar, or about $32.5 million per mile.
      Meanwhile, this rail project is estimated to be 800 miles long and cost $100 billion to construct.
      That works out to 8×10^-9 miles per dollar, or $125 million per mile.
      So, in summary, a California railway is over 3× as expensive per mile as a rollercoaster. Not to mention, this rollercoaster has a beefed-up support structure due to earthquakes.
      That's government at work.
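
    [Ed.: The arithmetic in the comment above can be reproduced with a short script. The dollar and mileage figures are the ones the commenter cites, not independently verified.]

    ```python
    # Back-of-the-envelope check of the cost-per-mile comparison above.
    # Input figures are as cited in the comment, not independently verified.

    coaster_cost = 50e6          # Steel Dragon 2000 build cost, USD
    coaster_length = 1.5404      # track length in miles (8,133.2 ft)

    rail_cost = 100e9            # estimated California rail project cost, USD
    rail_length = 800            # estimated route length, miles

    coaster_per_mile = coaster_cost / coaster_length   # ~$32.5 million per mile
    rail_per_mile = rail_cost / rail_length            # $125 million per mile

    print(f"Coaster: ${coaster_per_mile / 1e6:.1f}M per mile")
    print(f"Rail:    ${rail_per_mile / 1e6:.1f}M per mile")
    print(f"Ratio:   {rail_per_mile / coaster_per_mile:.1f}x")
    ```

    The ratio comes out to roughly 3.9, consistent with the comment's "over 3×" claim.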

  9. BladdyK   2 years ago

    This is that strange gloom and doom that comes about from certain people with technology. Technology is what keeps us alive and to survive it must progress. Someone watched The Terminator or 2001 one too many times.

    1. Nardz   2 years ago

      "Technology is what keeps us alive"

      You're not worthy of life.

      1. TheReEncogitationer   2 years ago

        So, Luddite! What are you going to use to deprive BladdyK of life? Pointed stick?

    2. RedPilledConservative   2 years ago (edited)

      ^^^ BladdyK = fanboy of gain of function research ^^^

  10. Social Justice is neither   2 years ago

    Moar testing needed and lockdown the AI just to be safe.

    1. Its_Not_Inevitable   2 years ago

      A 2 week pause should about do it.

  11. Overt   2 years ago

    I'm a big fan of Mr Musk, but what he and his fellow letter-signers ask is not only silly, it is destructive.

    Right now, as I type, I have a model training on my desktop workstation that I built and assembled with spare cash. I'm just messing around, but if my research produced something useful, I could cut my time down at the cost of around $2 per hour for a cloud GPU that is more powerful than my current workstation...and I could buy as many of those as I can stomach. This proliferation of tools and capabilities is unstoppable.

If Musk et al. were to succeed in getting ethical scientists to "pause," all they would be doing is ceding R&D to unethical people. Rather than request that people stop what they are doing, people like Musk should be funding the mitigation of the risks they find so troubling.

    1. Diane Reynolds (Paul.)   2 years ago

      This proliferation of tools and capabilities is unstoppable.

      That unserialized lower under your desk is illegal. We may not know you have it, and we can't stop you from making it, but if we catch you with it...

    2. TheReEncogitationer   2 years ago

      Musk needs to stick with something fantastic that's reachable right now like making the Internet entirely satellite-based, not pipe dreams about colonizing Mars and Doomsday scenarios with AI.

      1. Zeb   2 years ago

        He seems to be able to do both.

        1. Overt   2 years ago

          Yup.

        2. TheReEncogitationer   2 years ago

Able, but evidently not inclined to do so. More's the pity, since a big bulk of data still goes through vulnerable above-ground, underground, and underwater cables.

  12. Nardz   2 years ago

    https://twitter.com/neontaster/status/1671652515224473602?t=8D2AQXTfyXGXn_tSAnX97Q&s=19

    Worth its own tweet. The evolution of a Snopes story.

    [Pic]

  13. Nardz   2 years ago

    https://twitter.com/catturd2/status/1671827380887846917?t=umBbXzX_EUcDYYeV-_iu2A&s=19

    Good morning to everyone except the weak, worthless Republican Party and their pretend victories.

    The only thing that really happened yesterday …

    The Durham hearing … the traitors in the FBI and DOJ who committed treason by setting up and framing a sitting President all have million dollar book deals or media gigs. Nothing happened to any of them.

    For Adam Schiff’s treason for lying to the American people, leaking lies, and fabricating a fake phone call in an impeachment sham hearing … he got a mean letter with zero punishment or even a fine. Later, he laughed at the Republican Party to their faces, mocking them and is now fundraising off their toothless kabuki theater.

    And of course, as I predicted - the Republicans rage tweeted and ran to Hannity to claim victory.

    Pathetic.

    1. RedPilledConservative   2 years ago (edited)

      It is incredibly depressing! The republicans are either incompetent or in on the deal … or should I say steal.

      Separately I read an article about Biden’s executive order regarding voter registration efforts driven by federal agencies and dollars – clearly unconstitutional – and how republicans should begin fighting it prior to the 2024 election.

      The executive order is over 2 years old and now they start thinking about how they may fight it!? Seriously?! The time to begin fighting it was right after it was signed!! More “posturing” from the ineffective “right”…

  14. Nardz   2 years ago

    https://twitter.com/DeAngelisCorey/status/1671712788903755776?t=q4m41fc80CMy_Pth9N6jLQ&s=19

    Randi Weingarten was just appointed to a council to advise the Department of Homeland Security.

    The council will provide recommendations on matters such as "Improving coordination and sharing of actionable threat and security-related information, including threats of violence as well as targeted violence and terrorism prevention."

  15. Jefferson's Ghost   2 years ago

    Even if we were to "pause" AI, does anybody really think that the NSA isn't going to expand its use of AI in analyzing every word, every document, and every photo in the entire world in its search for wrong-thought and wrong-speak in the war against Eastasia, and send out the authorities to deal with it?

    1. Social Justice is neither   2 years ago (edited)

      More to the point, does anyone believe China and Russia will slow down their efforts.

      1. Jefferson's Ghost   2 years ago

        Yeah, that too.

  16. raspberrydinners   2 years ago

    And what's the harm in pausing for 6 months to establish better procedures around it?

    The wealthy might not see a payday for that long? Oh heavens no...

    And I present my theorem- Reason is generally wrong about most things so it stands to reason that if they're for it, we should generally look to be against it.

    1. Zeb   2 years ago

      There's no particular harm. It's just a stupid, impossible idea. AI is out there as open source software. Millions of people are working on it and using it. There is no pausing it at this point. Software is information and you can't control information like that once it is out in the wild.
      And who is going to establish these procedures and who will determine if they are better?

  17. Nardz   2 years ago

    https://twitter.com/amuse/status/1671829714560860161?t=WLjNcsNl26sBbMCr7rP5gg&s=19

    THE END JUSTIFIES THE MEANS: Democrat donor realized he could do more for his party by starting a massive forest fire in Yosemite National Park. The media played along blaming global warming instead of their political ally.

    [Link]

    1. BigT   2 years ago

      Global Warming made him so mad he had to start a fire. obv.

  18. DRM   2 years ago

    Yeah, if you tendentiously lump libertarian techno-optimists who have a specific objection to pursuing one specific technology (for example, Yudkowsky) with general Malthusian morons, you can then pretend the fact that the latter were regularly wrong justifies ignoring the former.

    As far as Hanson being right on timeline issues, this is the guy who wrote a book on the premise that we'd get whole-brain emulation up and running before we achieved AGI -- just in time to see AlphaGo whip Lee Sedol between when the book was edited and when it shipped.

    1. NOYB2   2 years ago

      Well, maybe you can answer me this: how specifically is AI going to lead to human extinction, how it is actually threatening humanity?

      1. Jefferson's Ghost   2 years ago (edited)

        “Well, maybe you can answer me this: how specifically is AI going to lead to human extinction, how it is actually threatening humanity?”

        Why, it’s as plain as the nose on my face: Misinformation, Disinformation, Conspiracy Theories, and on and on. THOSE are what are destroying humanity. Of course, we don’t need AI for any of that, anyway.

      2. DRM   2 years ago

Yeah, see, the whole point is that if something is significantly smarter than you, you can't predict specifically how it will kill you. You can simply be sure that if it decides to kill you, it will outwit you. So you have to design it so it can't decide to kill you before you build it, because you won't have a chance to stop it afterward.

        Therefore, Yudkowsky says, we should actually figure out how to design an AI that won't decide to kill us first, before we risk building an AI smart enough that we can't stop it.

        Robin Hanson says he doesn't think we have to worry any time soon about figuring out how to make safe AIs because better AIs won't be better enough to be dangerous for a long time. It's entirely possible that he's right -- but the early returns are that he's a pretty lousy futurist.

        1. NOYB2   2 years ago

          Yeah, see, the whole point is that if something is significantly smarter than you, you can’t predict specifically how it will kill you.

          There are plenty of psychopaths in the world who are significantly smarter than 99.99% of the world population, people with actually murderous intent. Have they led to human extinction? No.

          So you have to design it so it can’t decide to to kill you before you build it, because you won’t have a chance to stop it afterward.

          AIs don't have arms, or legs, or anything else. They can act only through whatever we hook up to them.

          So, assume a really smart and murderous AI. How exactly is it going to lead to human extinction?

  20. Old Engineer   2 years ago

    This sounds more like an attempt to produce an AI cartel and control the market. I wonder which of the cartel members will be the first to violate the agreement and keep AI development going? Probably all of them.

    It is usual in new product markets for two to three sellers to eventually dominate and the rest to be absorbed. Each of these "geniuses" will break the cartel and try to become one of the few remaining sellers. The real fear of AI comes from the sellers themselves who fear that they will not survive the rigors of the market.

  21. NOYB2   2 years ago

    It's just standard regulatory capture: Google, OpenAI, Meta, and Microsoft want to create barriers to entry for smaller competitors by imposing regulations that require costly and lengthy compliance processes. Big companies are also worried that AI is going to destroy their business models and want more time to adapt.

    1. Old Engineer   2 years ago

      I think your points are valid, especially considering that many of the heads of companies developing AI have the tendency to either ignore Chinese crimes or praise the Chinese governing model.

    2. TD   2 years ago

      Yes, valid point. Established firms often prefer unfree markets if it hobbles existing or potential competitors and enables executives to relax a bit and leave at 3 for golf. The tobacco industry is still doing very well.

  22. Dillinger   2 years ago

    >>Doomsayers have a long track record of being wrong.

    hello kettle, you're black. ~~the pot

  23. TD   2 years ago

    Regulation often means outlawing or circumscribing certain activities. In the case of AI, just what activities do they want banned? AI can apparently write poetry. Do we outlaw that? Do we permit it to write some poems and not others? Would we, for example, ban it from writing sonnets but still permit limericks? No one ever says just what it is they don’t want AI to be able to do.

  24. Rich   2 years ago

    If any entity violates the A.I. moratorium, Yudkowsky advises, "destroy a rogue datacenter by airstrike."

    How about a practice strike against Fancy Bear or Cozy Bear to show us you're serious?

  25. EdWard   2 years ago

    My father grew up on a farm without electricity or running water. The toilet was outside - in Alberta! Transport and plowing was powered by horses. He lived to see people living on a space station, freeways and ubiquitous air travel among other marvels. The population of the world went from 2 billion to 7.8 billion in his lifetime, and the UN Food and Agriculture Organization says we produce too many calories for all those people. I'm disinclined to believe that the next big change will obliterate us.

    1. Diane Reynolds (Paul.)   2 years ago

      and the UN Food and Agriculture Organization says we produce too many calories for all those people.

      The big change coming is that governments are working to fix that.

      1. Old Engineer   2 years ago

        Starvation is a big disincentive to revolt. Look how well it works in North Korea!

    2. TheReEncogitationer   2 years ago

      Hurray! You win The Internet For The Day!

      Salute to you in The Great White North from a fellow Technophile/Transhuman/Singularitarian in the U.S.A.! You just said the wisest words here so far! 🙂

  26. Public Entelectual   2 years ago

    How many issues too late will Ron research the NPR and PBS War Against Gas?

    The campaign to ban gas cooking stoves hangs on finding tens of parts per trillion of benzene in kitchen air, but nobody has told the public that the study's largest cohort of houses are in the middle of America's most heavily developed oil field, in Bakersfield California.

  27. Me, Myself and I   2 years ago

On the one hand, AI only does what it's programmed to do. LLMs such as ChatGPT, Dolly 2.0, and Falcon generate text when prompted. Image generators such as DALL-E and Stable Diffusion generate images when prompted. No one's programming an AI to end the world.
On the other hand, "loads of people said the world would end in the past, but it's still around" isn't an argument. Ever heard of survivorship bias? There are probably many species on other planets which ended because they didn't listen to the naysayers.

    1. TheReEncogitationer   2 years ago

      The first of your statements on AI is definitely correct. AI, like any computer or, indeed any human, operates on the principle of GIGO ("Garbage In, Garbage Out.")

      The second statement, eehh. While not a valid deductive argument, it is a valid inductive argument, as well as an argument against the credibility of doomsayer sources who propagate one doomsday scenario after another and give specific dates and times and are perpetually wrong. Garner "Ted" Armstrong, Hal Lindsey, Art Bell, George Noory, call your offices.

      Also, the assertion of dead, unheeding species on other worlds requires evidence of any species on other worlds. Must wait and see...

  28. Public Entelectual   2 years ago

The NIMBY reveal is buried in the study's supporting materials:

    https://pubs.acs.org/doi/10.1021/acs.est.2c09289

© 2024 Reason Foundation

This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.

r

Do you care about free minds and free markets? Sign up to get the biggest stories from Reason in your inbox every afternoon.

This field is for validation purposes and should be left unchanged.

This modal will close in 10

Reason Plus

Special Offer!

  • Full digital edition access
  • No ads
  • Commenting privileges

Just $25 per year

Join Today!