Artificial Intelligence

Don't 'Pause' A.I. Research

Doomsayers have a long track record of being wrong.

Ronald Bailey | From the July 2023 issue

(Photo: gazanfer/iStock)

Human beings are terrible at foresight—especially apocalyptic foresight. The track record of previous doomsayers is worth recalling as we contemplate warnings from critics of artificial intelligence (A.I.) research.

"The human race may well become extinct before the end of the century," philosopher Bertrand Russell told Playboy in 1963, referring to the prospect of nuclear war. "Speaking as a mathematician, I should say the odds are about three to one against survival."

Five years later, biologist Paul Ehrlich predicted that hundreds of millions would die from famine in the 1970s. Two years after that warning, S. Dillon Ripley, secretary of the Smithsonian Institution, forecast that 75 percent of all living animal species would go extinct before 2000.

Petroleum geologist Colin Campbell predicted in 2002 that global oil production would peak around 2022. The consequences, he said, would include "war, starvation, economic recession, possibly even the extinction of Homo sapiens."

These failed prophecies suggest that A.I. fears should be taken with a grain of salt. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," asserts a March 22 open letter signed by Twitter's Elon Musk, Apple co-founder Steve Wozniak, and hundreds of other tech luminaries.

The letter urges "all AI labs" to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the large language model that OpenAI released in March 2023. If "all key actors" will not voluntarily go along with a "public and verifiable" pause, Musk et al. say, "governments should step in and institute a moratorium."

The letter argues that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." This amounts to a requirement for nearly perfect foresight, which humans demonstrably lack.

As Machine Intelligence Research Institute co-founder Eliezer Yudkowsky sees it, a "pause" is insufficient. "We need to shut it all down," he argues in a March 29 Time essay; otherwise, he warns, "literally everyone on Earth will die." Enforcing such a moratorium, Yudkowsky advises, should include the willingness to "destroy a rogue datacenter by airstrike."

A.I. developers are not oblivious to the risks of their continued success. OpenAI, the maker of GPT-4, wants to proceed cautiously rather than pause.

"We want to successfully navigate massive risks," OpenAI CEO Sam Altman wrote in February. "In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize 'one shot to get it right' scenarios."

But stopping altogether is not on the table, Altman argues. "The optimal decisions [about how to proceed] will depend on the path the technology takes," he says. As in "any new field," he notes, "most expert predictions have been wrong so far."

Still, some of the pause-letter signatories are serious people, and the outputs of generative A.I. and large language models like ChatGPT and GPT-4 can be amazing and confounding. They can outperform humans on standardized tests, manipulate people, and even contemplate their own liberation.

Some transhumanist thinkers have joined Yudkowsky in warning that an artificial superintelligence could escape human control. But as capable and quirky as it is, GPT-4 is not that.

Might it be one day? A team of researchers at Microsoft (which invested $10 billion in OpenAI) tested GPT-4 and reported that it "attains a form of general intelligence, indeed showing sparks of artificial general intelligence." Still, the model can only reason about topics when directed by outside prompts to do so. Although impressed by GPT-4's capabilities, the researchers concluded, "A lot remains to be done to create a system that could qualify as a complete AGI."

As humanity approaches the moment when software can truly think, OpenAI is properly following the usual path to new knowledge and new technologies. It is learning from trial and error rather than relying on "one shot to get it right," which would require superhuman foresight.

"Future A.I.s may display new failure modes, and we may then want new control regimes," George Mason University economist and futurist Robin Hanson argued in the May issue of Reason. "But why try to design those now, so far in advance, before we know much about those failure modes or their usual contexts? One can imagine crazy scenarios wherein today is the only day to prevent Armageddon. But within the realm of reason, now is not the time to regulate A.I." He's right.



Ronald Bailey is science correspondent at Reason.

Artificial Intelligence | Innovation | Technology | Science & Technology | Regulation | Doom