Even as Americans benefit from generative artificial intelligence (A.I.)—that is, A.I.s that can create text, audio, and video—both business leaders and the general public fear the technology will lead to catastrophe.
A survey of 119 attendees of the June 12 Yale CEO Summit found that 42 percent believe A.I. could "destroy humanity" in five to 10 years. A majority said the risks posed by A.I. are not overstated.
These fears are also common among the wider public. In a May 16 Reuters/Ipsos poll of 4,415 U.S. adults, 61 percent said they believe A.I. could threaten the future of civilization, and more than two-thirds said they are worried about artificial intelligence's negative consequences. The poll also found that three times as many people "foresee adverse outcomes" from A.I. as those who do not.
In an April 3 YouGov poll, almost 70 percent of respondents endorsed "a six-month pause on some kinds of AI development."
Those results followed calls by public figures for restrictions on A.I. development. Notably, a March 22 letter signed by several leading researchers and executives, including Elon Musk and Steve Wozniak, claimed that "AI systems with human-competitive intelligence can pose profound risks to society and humanity" and called for "AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Wall Street Journal columnist Peggy Noonan called that month for a "World Congress" to regulate A.I., arguing that the technology's creators are "generally, morally and ethically shallow—uniquely self-seeking and not at all preoccupied with potential harms done to others through their decisions." During a May 16 hearing, the Senate discussed regulating A.I. with an agency similar to the Nuclear Regulatory Commission.
Technological change has been producing reactions like these for centuries.
"It's always been easier to imagine how you get machines, whether it's A.I. or many other innovations from the U.S. during the Industrial Revolution, to automate existing jobs than to imagine how they make workers more productive in those jobs and, more importantly, what new jobs might be created," says Jim Pethokoukis, a senior fellow at the American Enterprise Institute who frequently writes on A.I.
Science fiction has amplified these fears, leading many to associate A.I. with deadly machines such as the ones in The Terminator. As Pethokoukis points out, "the default position by too much of society is that A.I. is going to be a dangerous technology and we need to do something about it."
"People vary in how much they fear new technology; new toasters or something like that doesn't trigger that much fear," explains Robin Hanson, an associate professor of economics at George Mason University. "But whenever a technology threatens to make fundamental changes to society, then people get much more scared. Think genetic engineering or nuclear energy."
While sci-fi-based fears undergird much of Americans' worries about A.I., the primary concerns relate to job loss. When a 2022 Tidio survey asked what people think are the least important worries about A.I., only 5 percent of respondents answered "AI taking our jobs," while a full 52 percent said "AI taking over the world." In other words, most respondents dismissed the doomsday scenario but took the threat to employment seriously.
An April 5 Goldman Sachs report did predict that A.I. "could expose the equivalent of 300 million full-time jobs to automation" worldwide. But it also predicted that the technology would produce a 7 percent increase in global GDP. Similarly, a report from the McKinsey Global Institute found that 60 to 70 percent of working hours could be automated. But that doesn't mean A.I. will produce mass unemployment. Those freed-up hours give workers the opportunity to create wealth in new, unforeseen jobs. To block A.I. is to block those future jobs and the economic growth they'll bring.