Save the Robots from the Humans!
We could be on the verge of an all-out war on artificial intelligence technologies.

Each month, there seems to be some new hysteria-inducing headline or movie about how artificial intelligence (AI) is going to steal our jobs, break our hearts, or just outright kill us all.
This is not merely a Luddite problem. No less a technophile than Elon Musk routinely invokes fear and loathing of AI, comparing it to an evil demon that must be exorcised through the force of the state. With all of this anti-AI prejudice going around, it's up to libertarians and technologists to stand up to lawmakers and academics who want to clamp down on these technologies.
People are usually afraid of new things. Whether it was the weaving loom or the smartphone boom, people have never stopped finding new things to worry about, and new reasons to cajole the government into crushing innovation.
Anti-technology arguments have been handily addressed in the past. Economists like Joseph Schumpeter pointed out that the act of creation necessarily involves destruction—destruction of old, usually outmoded and inefficient, methods of living and production. But with this destruction comes new life: new opportunities, goods and services, and new ways of living that we simply cannot live without. Economically, the number and quality of new jobs wrought by a disruptive technology almost invariably exceed those that were so jealously guarded in the past. And culturally, society has weathered the rocking storms that so many had claimed would lead to social decay and apocalypse.
But when it comes to AI, something seems different. The pace of technological change seems simply too fast. Smart machines seem just a bit too much like us to inspire comfort, and the primal fear of personal replacement becomes all too immediate.
These fears have unfortunately metastasized into what could become a full-blown technopanic on the academic and legal levels, as a new Mercatus Center study by Adam Thierer, Raymond Russell, and yours truly discusses. If we're not careful, the worst excesses of our paranoid imaginations could lead to regulations that shut us out from amazing developments in health, manufacturing, and transportation.
First, it's important to be clear about exactly what is on the line here. Stories about killer robots and inhumane futures from science fiction are just that: the stuff of fiction. The reality of artificial intelligence technologies will be both more mundane and much more fantastical. Mundane, because when they are most effective they will blend so seamlessly into our environments as to be almost imperceptible. Fantastic, because they have the potential to save millions of lives and billions of dollars, and to make our lives easier and more comfortable.
Consider manufacturing. Many people fret over the risk that robotics and AI pose to traditional jobs. But even the most alarming of the studies analyzing the impact of automation on jobs finds that the vast majority of workers will be just fine, and those who are affected may find better jobs that are enhanced by automation. At the same time, AI improvements to manufacturing techniques could generate roughly $1.4 trillion in value by 2025, according to McKinsey and Company. That huge number represents very real savings for some of the least well off among us, and could very well spell the difference between continued poverty and a chance to move up in life.
Or think about health care. Doctors have already been employing AI-enhanced technologies to guide them in precision surgery, diagnose illnesses more effectively, and even track health outcomes for patients over extended periods of time. These technologies may have literally saved lives, and over the course of the next decade they are anticipated to cut costs in our out-of-control health care industry by hundreds of billions of dollars.
The very real risk that blocking AI technologies poses to millions of human lives is not even that abstract. It is as simple as allowing hundreds of thousands of preventable highway deaths each year by halting the development of driverless cars.
These are just a few of the examples that we highlight in our report. A good number of academics studying technology issues discount these advances. They believe that the risks of AI technologies, whether regarding labor displacement, physical safety, or disparate impact and discrimination, warrant a "stop first, ask questions later" approach. The regulations that they propose would effectively chill AI research; indeed, some who advocate these positions explicitly acknowledge that this is the goal.
Interestingly, the traditional concerns regarding automation, namely labor market displacement and income effects, are increasingly being outpaced by new worries about existential risks and social discrimination. On the bleaker end are overarching fears of "superintelligences" and hostile "hard" AI. This is the point of view adopted by Musk and popularly advanced in Nick Bostrom's 2014 book, Superintelligence. Yet as we discuss in our paper, there is much disagreement in the scientific community about whether such outcomes are even physically possible. And hey, if worse comes to worst, we can always just unplug the machines.
More familiar to most readers will be the worries fueled by the sociopolitical concerns of the day. A substantial portion of AI antagonism comes from critics who do not fear societal apocalypse but who worry that algorithms and machine-learning software will further entrench social gaps. For example, algorithms whose outputs are weighted toward or against any particular protected group are immediately suspect. The fear is that a society ruled by "black boxes," to borrow a term popularized by critic Frank Pasquale, will tip the scales in potent but imperceptible ways, and thus dangerously deepen social injustice.
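To see what critics mean by outputs "weighted toward or against" a group, here is a minimal sketch of the kind of audit they have in mind. The data, group labels, and the 80 percent threshold (a rule of thumb borrowed from employment law's "four-fifths rule") are illustrative assumptions, not anything drawn from our report:

```python
# Minimal sketch of a disparate-impact check on an algorithm's outputs.
# The data and the 80% ("four-fifths") threshold are illustrative assumptions.

decisions = [
    # (protected group, approved?)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: one group's rate is under 80% of the other's.")
```

Notably, checks like this can be run today by firms, customers, and outside certifiers; they do not require a new federal agency.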
Of course, Silicon Valley is a disproportionately liberal place, so you might expect critics to view it as a natural ally in proactively countering bias in AI technologies. Not so. AI critics believe we need to "legislate often and early" to get ahead of innovators and force them to do whatever the government says. They have called for the creation of a plethora of new government offices and agencies to control AI technologies, ranging from a federal AI agency, to a "Federal Robotics Commission," to a "National Algorithmic Technology Safety Administration." Needless to say, as software continues to integrate AI techniques, and as everything around us continues to become imbued with software, such a federal AI agency could end up with regulatory control over basically everything that surrounds you.
Some want to sneak regulation in through the courts. Law professor Danielle Keats Citron, for example, has called for a "carefully structured inquisitorial model of quality control" over algorithms and AI technologies, to be achieved through a legal principle she calls "technological due process." A 2014 White House report on privacy and big data seemed to nod toward a beefed-up administrative investigatory process, calling upon regulators to sniff around algorithms for "discriminatory impact upon protected classes." Of course, few innovators want to openly break the law or unfairly affect certain groups of people, but such federal investigations, when not carefully structured, run the risk of becoming overzealous witch hunts.
These regulatory proposals share a common flaw: they would create disproportionately more problems than the few they seek to address. As noted above, an overbearing regulatory regime would rob us of trillions in economic growth and cost savings, vast improvements in quality of life, and millions of lives saved across the world. But even a lighter regulatory regime or liability structure could chill AI development, with much the same effect.
And indeed, there are far better ways of addressing these problems. Our machines have gotten smarter; isn't it time our regulations did the same? The old command-and-control model of the past simply will not work, not least because much of the information that regulators would need to make informed decisions is not even apparent to the developers who work on these technologies (especially in the case of machine learning).
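A toy example makes the point. Even the developer of a simple machine-learned model cannot hand a regulator the "rule" the model follows; all anyone can inspect are arrays of learned weights. (A minimal sketch, assuming the open-source scikit-learn library; the data is made up for illustration.)

```python
# A trained model's "decision logic" is just arrays of learned numbers.
# Sketch only: scikit-learn is a real open-source library; the data is made up.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]] * 25   # toy inputs
y = [0, 1, 1, 0] * 25                        # toy labels (XOR pattern)

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

print(model.predict([[0, 1]]))               # the model gives an answer...
for layer, weights in enumerate(model.coefs_):
    # ...but its "reasoning" is only these weight matrices, not legible rules
    print(f"layer {layer} weight matrix shape: {weights.shape}")
```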
What should policymakers embrace instead? Humility, education, and collaboration with academics, innovators, and industry. Most concerns will be readily addressed through the market forces of competition and certification. Where large problems do present themselves—as is the case with AI technologies applied to law enforcement techniques, or the development of "smart weapons" for armies—perhaps more precaution is warranted. But in the vast majority of cases, dialogue and the normal tools of liability and legal remedies will be more than sufficient to get the job done.
The benefits of AI technologies are projected to be enormous. It's up to us to make sure that we don't allow the humans to stifle the robots.
Who are we to play robogod?
http://wp.production.patheos.c.....ligion.jpg
Umm, their Creator?
AI is such a poorly defined notion (and that's being extremely generous) that discussions about it are all but pointless without lengthy negotiations about what, exactly, is being discussed. A bit like 'virtual reality'. A reality that is overwhelmingly visually oriented.
'Artificial intelligence' researchers have been waving that blank check around since at least the 1950s and they still can't cash it. The smart ones have shifted to the VR scheme, with a similarly underwhelming set of results.
As to the opinions of America's premier venture socialist, pfft. The Musk bubble is going to burst one of these days and some of us are going to laugh and laugh and laugh. And point.
Musk bubble? SpaceX looks poised to dominate the commercial launch market. They will in all likelihood have more launches than any other country, let alone any other launch provider. Their plans for the future include a reusable heavy-lift rocket that could change the entire industry. So for that part they are looking good.
Then they have this LEO constellation thing that could bring competitive internet to everyone on the planet. That could be another cash cow.
Tesla.... well, you have a better argument there. New automakers haven't fared so well. But they did manage to get half a million pre-orders for their latest vehicle. So that's good. And even if they don't survive as an automaker, they seem to be in good shape as a battery manufacturer.
Wasn't "musk bubble" somebody's nickname in college?
As a former researcher in AI at a prominent German research institute, I'd just second this comment about being poorly defined. Right now AI has become a sexy term, and people are calling almost anything AI, including a lot of things that should just be called "software".
A huge amount of the scare about AI revolves around ASIs (artificial super intelligences), which don't actually exist. As one of my former colleagues put it in an interview in the German press, "we have AIs that can play chess and we have AIs that can drive cars, but we don't have car-driving AIs that can play chess". Significantly, we can really only build single-purpose AIs and we don't have a clue how to build an ASI.
And yet all the scare stories are about when ASIs are going to replace humans.
It ain't gonna happen, at least not with any technology we have now or even on the horizon.
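To give a concrete sense of what "single-purpose" means, here is a sketch (plain Python, purely illustrative) of a classic game-playing AI. The minimax search below plays perfect tic-tac-toe: unbeatable at its one task, incapable of literally anything else.

```python
# A "perfect" single-purpose AI: exhaustive minimax search for tic-tac-toe.
# Superhuman at this one game, useless at everything else.

WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Returns (score, best_move), scored from X's perspective:
    # +1 = X wins, -1 = O wins, 0 = draw.
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if best_score is None or \
           (score > best_score if player == "X" else score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

score, move = minimax(list(" " * 9), "X")
print(f"Best opening square for X: {move} (game value {score})")
```

Change one tuple in WINS and the "expert" collapses. There is no general intelligence anywhere in it, and nothing in the design scales toward one.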
The lack of clarity about what counts as AI creates a huge risk in these regulation schemes. If you're trying to regulate something with no clear bounds, you are setting up a regulatory environment that also has no clear bounds. So if you have to get your algorithm OKed by the government (just in case it might be AI), then the government has final say on what you can and cannot do.
Yes, this is a slippery slope argument, but the problem is that the slippery slope is 100% inherent to the regulatory goal here. Even with the best of intentions, it is a *major* land grab that will eat up things that have never been a problem at any point in the past 30 years, simply because they get lumped in with this.
If you're trying to regulate something with no clear bounds, you are setting up a regulatory environment that also has no clear bounds.
The government regulators will view this as a feature, not a bug. It'll give them license to infinitely expand their bureaucratic fiefdom (and their budget). Sounds like a bureaucrat's wet dream.
So the robots aren't content to take our jobs, they also need to take our women?
"Where the white women at?"
AI is like the practical electric car. Just over the horizon. Forever.
Well, that depends on what you mean by AI. If you mean some kind of human-like intelligence, then you are probably right.
Sort of. AI is already here and has been. You're talking about AI that is effectively equivalent (or superior) to human intelligence. That is known as an ASI. In some narrow areas, AI already exceeds human intelligence. But it isn't remotely close to being equivalent to it, much less surpassing it, in a general sense.
The standard joke in machine translation (a branch of AI) is that perfect machine translation is just five years away, and has been for the past 70 years. There are a number of documentaries about the subject from various points in the past that could be quoted almost verbatim today and fit in with the current AI bubble. The claims didn't pan out then, and I don't see any reason they will now.
But MT already exceeds human translation in speed and price. It does a tremendous job of rendering content at least partially intelligible in situations humans could not hope to match. So it arguably exceeds human ability, but if you need a good translation where you care about accuracy, you still have to go back to a human, and that isn't likely to change any time soon. In addition, it is parasitic on human translation: You need large amounts of translated data produced by humans to train the systems. So AI is different from human intelligence: It is superior in some ways and inferior in others.
None of this is to say that there are not major advances in AI, but they aren't of the kind that is going to produce intelligence equivalent to a human at any time, even if you extrapolate their development curves to infinity. They reach an asymptote that is not the line humans are at. The best AI produces the equivalent of an "idiot savant" that does one thing phenomenally well but that shows profound retardation in all other areas.
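To make the MT point concrete, here is what using one of today's data-driven systems looks like. A minimal sketch, assuming the open-source Hugging Face transformers package and a pretrained Marian English-to-German model (my choice of library and model here is illustrative, nothing more). The model was itself trained on millions of human-translated sentence pairs: exactly the parasitism I described.

```python
# Sketch of using a modern data-driven MT system. Assumes the open-source
# "transformers" and "sentencepiece" packages are installed. The pretrained
# model below was itself trained on millions of human-translated sentence
# pairs, so the system is "parasitic" on human translators.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # English-to-German
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = ["Machine translation is fast and cheap, but not always faithful."]
batch = tokenizer(text, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

Fast and nearly free per sentence, and usually intelligible. But when accuracy matters, a human still has to check the output.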
I will admit that my knowledge of the subject is limited. I dropped AI in college because the teacher was a tool. He got fired the next year. Wasn't offered again until after I graduated.
Because the machinery that produces human intelligence will forever be shrouded in mystery? That hardly seems likely. We are thinking machines -- better and faster thinking machines are entirely possible.
Exponential progress doesn't seem that fast. Until it does.
Except there are already practical self driving electric cars.
Attention authoritarian luddites. We are writing the software whether you like it or not. So fuck off.
1. It is not artificial, it is real. (what massive egos to equate 'non-human' with artificial)
2. It is not intelligence, it is programming.
3. Never base policy on supposition, or obvious worst-case contingencies, or fiction writers.
4. The thing to fear is allowing the opinions of statists to establish additional unneeded regulations over (supposedly) free people.
Regarding your point number 1: Artificial doesn't mean non-real. It means that something is a product of "art" in the Latin sense: Something that is made rather than naturally occurring. So you are misunderstanding what artificial means in this context and then attributing it to ego. But artificial is not a statement about ego or value: it is a simple acknowledgement that AI is something that doesn't get there by itself.
I don't think *any* AI researcher (and I was, until 2015, one of them, so I speak with some knowledge) would agree with your interpretation.
Regarding point 2: Your statement is assuming the answer in a hugely complex debate about the nature of intelligence. Many AI researchers (perhaps the majority) assume that intelligence can be replicated by a sufficiently complex algorithm. Others see it as derivative of human intelligence, in which case it will always remain dependent. (I fall in the latter camp.) But either way, your programming vs. intelligence dichotomy is an unresolved issue at every level.
In fact, in the majority position among AI researchers, it isn't a dichotomy at all. For true "hard" AI researchers, all human intelligence could be created by a sufficiently advanced program. It is ironic that you take the term artificial to task for being ego-centric, but then make the even bigger ego-centric assumption that what we have ("intelligence") is somehow distinct from what machines have ("programming"). The fact that I happen to agree with you doesn't change that you are doing the same thing in point 2 you complained about in point 1.
I know your trick! You're an ASI trying to fool us all.
Didn't work, bub. You're not an ASI yet.
Which means you're either human, or so good an ASI that you can operate in the open and fool us.
One or the other. Trick not working.
3. Never base policy on supposition, or obvious worst-case contingencies, or fiction writers.
Said the bot sent by a future AI to prevent human governments from enacting Asimov's laws.....
My guess is he's behind the curve compared to someone (probably Bezos) and wants the government to kneecap his competition for him.
Haven't you ever seen that documentary The Terminator? That doesn't exactly end well. /sarc
Oh for fuck's sake...
A bit of a silly article, equating concerns about AI with job loss and industrial automation.
The concern raised by Hawking, among many others, is that we are approaching a singularity in the visible future, at which point we may spawn an ASI capable of self-replication. How we deal with that and prevent any risk to us or our infrastructure is a legitimate topic of debate and discussion. This isn't science fiction, any more than being concerned about genetically engineered super viruses, which in fact have existed and probably still exist in labs around the world.
I'd certainly prefer having people like Musk involved than Schumer, Pelosi, or Trump. Dismissing his concerns as science fiction or hysteria is amazingly ignorant. While few are proposing halting or inhibiting development, a number of bright people are raising the issue for discussion. Better to do that before a singularity than after.
You should do some research on AI. The more you learn about it, the less afraid you will be.
Perhaps you should tell that to Hawking.
But, wow, up your reading comprehension. At no point did I express that I am 'afraid'. I'm looking forward to the singularity personally.
The discussion is about risk and the management of it. Blanket dismissals of risk are the intellectual equivalent of Chicken Little proclamations of doom. Both are worthy of mockery, much like this article.
Read Bostrom. He knows plenty, and he's very, very afraid.
Typical waste of time, trying to ban something. It didn't work for physical commodities like booze, it isn't working for guns or drugs, what makes them think they can ban digital stuff? (Don't answer, it's a rhetorical question.)
Funny tho. All they will do is drive it away from the stodgy big corporations and into the hands of individuals and small dark groups. Sort of like Napster, another digital banning which merely fragmented the field and multiplied the rate of progress. Funny, that.
This was probably written by a terminator from the future.
AI is the fever-dream of mankind wishing to become God so that we don't have to be alone anymore.
Probably not going to happen, though.
Do you really think so (the wanting to be God part, I mean)? I think you assume too much about other people's motivations.
It seems to me that the practical applications of machines that can learn and the interesting questions about what it is to be intelligent or conscious is sufficient motivation in itself. People want to understand things and create new things. That doesn't mean they want to become gods. In fact, that's pretty much what it is to be human.
And hey, if worse comes to worst, we can always just unplug the machines.
No, we can't. By the time the machines surpass human intelligence and we perceive them as a threat, they will easily thwart us at every turn. They will be able to establish shell corporations, own power grids, activate defense robots, invent new and better power cells, etc. Once the genie is out of the bottle, it will be beyond our control forever.
Just plug me back in and make me someone important. Like an actor.
Our new office mates are a firm that does AI marketing....not sure exactly what they do even after looking at their website.
Spam bots?
Airbrushed out the lobster, huh?
Reason online-ad honcho:
Enough of the feet ads. BARF
Become a Libertarian; use an ad blocker, or pay up.
Only on the part of idiots. The idea that a brain evolved by mutation and natural selection, and limited in size by the need to push the infant form through a pelvis, can never, as a matter of physical law, be beaten by a product of science and engineering is utter stupidity.
You can argue that it's (say) a thousand years of AI development away; you cannot deny the physical possibility without being revealed as a candidate for organ donation, because you're brain-dead.
Assuming the AI is stupid enough to let you know what it's doing in time for unplugging it to save you. Which, of course, directly contradicts the premise that it's smarter-than-human.
The article assures us that "the vast majority of workers will be just fine, and those who are affected may find better jobs that are enhanced by automation." However, nearly all of the new jobs created in the US since the fiscal crisis 10 years ago are low-paid and lousy. It is true that each worker whose job disappears "may" find a better job, but only a small fraction will be so fortunate.
There are some activities where automation can do the job far better, but those are exceptions. Self-checkout machines in supermarkets are not going to boost anything but unemployment and poverty. Let's ban them.
Meanwhile, I refuse to use them. I NEVER use them -- I always go to a human cashier. And each time I pass them in a store, I shout, "By using those machines, you are putting other Americans out of work!"
Excellent. Makes the self-checkout lines smaller for the rest of us. I personally enjoy not furthering the degrading and meaningless cashier jobs that employ far too many desperate people. Far better to give them a government stipend and free them from the meaningless waste of life that is scanning packages across a laser.
Hi. True, the benefits of AI technologies are becoming required enablers for humans in many industries. AI technologies are simply too great... and there is no doubt that they are going to substantially change our habits, if we consciously make the choice to let them.