
How to Survive a Robot Uprising

Seeing dark omens of catastrophe in new tech demos.


Rise of the Robots: Technology and the Threat of a Jobless Future, by Martin Ford, Basic Books, 352 pages, $28.99

Martin Ford, author of Rise of the Robots, doesn't like the recent increase in U.S. wage inequality. So he wants to tax the rich more, to fund a basic income guarantee for the poor. (But only the U.S. poor. Other poor don't seem to concern him.)

Maybe you think you've heard this story before. But Ford, a software engineer and businessman, doesn't argue that inequality is unethical or that it will destroy democracy. He instead argues that inequality will soon get much worse, so bad that most adults won't be able to find jobs. So bad the economy will descend into "catastrophe." And all because of robots.

Now, Ford wants to reassure you that he isn't crazy. He isn't one of those people who see robots with human-level intelligence coming soon and superintelligent terminators killing us all soon after. No, Ford just thinks that dumb robots specialized for particular jobs are quite enough reason to panic.

In the old days, if you wanted to scare people into action via fear of a coming catastrophe, you could point to most anything unusual as an omen: an eclipse, a sighting of a strange animal, a king dying young, perhaps even a new strain of music becoming popular. It helped if your coming catastrophe was something, like a flood or war, that everyone knew would come eventually; it was a matter of when, not if.

Today, we know more about how the world works, so fearmongers can't just point to any aberration as an omen. But Ford's fears are thoroughly modern: all those new computer-based gadgets. Such things spook many people today, because super-robots come from a realm of futurist speculation that has landed with a plausible plop into the world we live in. A whole intellectual industry has sprung up to treat computer demos as dark omens.

Ford is correct that, like floods or wars, super-robots are likely to arrive eventually. That is, if our automation technologies continue to improve, it is plausible that in the long run, robots will eventually get good enough to take pretty much all jobs.


But why should we think something like that is about to happen, big and fast, now? After all, we've seen jobs replaced by automation for centuries. Sure, there have been fluctuations in which kinds of jobs are more valued and which are most vulnerable to automation. Wage inequality has also varied. But why shouldn't we just expect these things to stay within roughly the same range of variation we've seen in the past? Workers found new jobs before, and the economy never imploded because of automation; more like the opposite.

Many have cried this wolf before. This isn't the first time people have been so impressed with new tools that they've warned machines may soon make us replaceable. Ford admits this, pointing out that in the 1960s such people were top academics who attracted big press. In the 1980s, I was personally caught up in a similar wave of concern; I left physics graduate school to start a nine-year career researching artificial intelligence (A.I.).

Like many others today, Ford says this time really is different. He gives four reasons.

First, there is a 2013 paper by Carl Frey and Michael Osborne, an economist and an engineer at Oxford University, estimating that 47 percent of U.S. jobs are at high risk of being automated "perhaps over the next decade or two." Ford likes this paper so much that he mentions it in three different chapters. Yet this 47 percent figure comes mainly from the authors "subjectively" (their word) labeling 30 particular kinds of jobs as automatable and 40 as not. They give almost no justification or explanation for how they chose these labels. Such a made-up figure hardly seems a sufficient basis for expecting catastrophe.

Second, Ford thinks recent labor market trends are ominous. In the U.S., median wages have been stagnant and wage variance has increased since about 1970, while the labor share of income, the fraction of adults who work, and the wage premium for college graduates have all fallen since about 2000. Ford sees automation as the main cause of all these trends, but he admits that economists reasonably see other causes, such as changes in demographics, regulation, worker values, organization practices, and other technologies.

Third, Ford notes that the rapid rate at which computer hardware prices fall could let computers quickly displace many jobs, if we reach a threshold where many jobs all require roughly the same computing power. But while computer prices have been falling dramatically for 70 years, the job-displacement rate has held pretty steady. This suggests that jobs vary greatly in the computing power required to displace them and that jobs are spread out rather evenly along this parameter. We have no particular reason to think that, contrary to prior experience, a big clump of displaceable jobs lies near ahead.
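To make that logic concrete, here is a rough simulation sketch (an illustration with assumed numbers, not anything from Ford's book): hardware cost-performance is assumed to double every two years, and jobs whose displacement thresholds are spread evenly across orders of magnitude are compared with jobs clumped near a single threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(70)                       # 70 years of falling hardware costs
compute_per_dollar = 2.0 ** (years / 2)     # assume cost-performance doubles every 2 years

n_jobs = 10_000
log_max = np.log10(compute_per_dollar[-1])
# Scenario A: displacement thresholds spread evenly across orders of magnitude
spread = 10 ** rng.uniform(0.0, log_max, n_jobs)
# Scenario B: a big clump of jobs sharing roughly the same threshold
clump = 10 ** rng.normal(np.log10(compute_per_dollar[40]), 0.3, n_jobs)

def yearly_displacement(thresholds):
    # A job is displaced once compute per dollar crosses its threshold
    displaced = thresholds[None, :] <= compute_per_dollar[:, None]  # (year, job)
    return np.diff(displaced.sum(axis=1))   # jobs newly displaced each year

print("even spread:", yearly_displacement(spread)[::10])  # roughly steady
print("clumped    :", yearly_displacement(clump)[::10])   # quiet, then a burst
```

Under the spread-out assumption, the yearly displacement count stays roughly flat, which is what we have actually seen; only the clumped assumption produces a sudden burst.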

And then there is Ford's fourth reason: all the impressive computing demos he has seen lately. This is where his heart seems to lie. He devotes far more space to describing things like Google's self-driving cars and language translators, IBM's Jeopardy champion Watson, Rethink Robotics' flexibly programmable Baxter robot, and Narrative Science's software for writing news articles than to explicating reasons one through three. Only rarely does Ford air any suspicion that the promoters of such systems exaggerate the rate of change or the breadth of the impact their new systems will have. (He is somewhat skeptical about the market for 3D printing and about prospects that self-driving cars will increase road throughput soon.) And of course several generations have seen A.I. demos with just as impressive advances over previous systems.

So basically, Ford sees a robotic catastrophe coming soon because he sees disturbing signs of the times: inequality, job loss, and so many impressive demos. It's as if he can feel it in his bones: Dark things are coming! We know robots will eventually take most jobs, so this must be now.

If a big burst of automation takes most but not all jobs, won't those who lose jobs to robots switch to doing jobs that robots can't yet do? After all, this is what we've seen for centuries, and it is the straightforward prediction of labor economics. But Ford says no: new firms like Google and Facebook have few employees relative to sales. As if Google's experience were some sort of universal law, Ford says, "Emerging industries will rarely, if ever, be highly labor intensive." Yet even if this turns out to be true, Ford doesn't explain why old industries can't hire more workers.

Moreover, even if workers could find new jobs, Ford still sees catastrophe if new jobs don't pay as much, increasing wage inequality. The economy will "implode," he says, because the rich just don't spend enough: "A single very wealthy person may buy a very nice car…But he or she is not going to buy thousands of automobiles…The wealthy spend a smaller fraction of their income than the middle class." Ford admits that increasing inequality since 1970 hasn't hurt spending, but he attributes this to increasing debt that can't last. (Yet that debt increase is small compared to the increased inequality.) He ignores the fact that the world economy had increasing wage inequality for centuries without imploding. Worldwide inequality has decreased only recently.

Ford eventually admits that "the global economic system" might "adapt to the new reality" via "new industries producing high-value products and services geared exclusively toward a super-wealthy elite." He calls this "the most frightening scenario of all," comparing it to the dystopian 2013 movie Elysium. In the end, it seems that Martin Ford's main issue really is that he dislikes the increase in inequality and wants more taxes to fund a basic income guarantee. All that stuff about robots is a distraction.

After all, there isn't a fundamental connection between automation and wage inequality; in past eras more automation was associated with less inequality. If there's a connection now, it may be temporary and change again. More important, if we want to increase transfers because we dislike inequality, we don't need to discuss robots at all. It wouldn't matter why inequality is high; we'd just increase transfers when we saw more inequality than we liked. Or set up a system, like a basic income guarantee, to do this automatically.

So why didn't Ford just say this straight out? Perhaps because many others have already taken that direct route, but with limited success. It seems most people just aren't very bothered by current levels of inequality. So they need to be scared with something else.

If I'm not persuaded by Ford's omens, what would persuade me? Well, I take betting odds seriously. Since automation might reduce employment, I've expressed my skepticism about big automation progress soon by betting $1,200 at 12–1 odds that the Bureau of Labor Statistics' measurement of the labor fraction of U.S. income won't go below 40 percent by 2025. And since better computer software should increase the demand for computer hardware, I've bet $1,000 at 20–1 odds that computers and electronics hardware won't be over 5 percent of U.S. GDP by 2025. That's just me, of course, but more and bigger bets like these could tell us what people think when they are willing to put their money where their mouths are. It wouldn't cost that much to create prediction markets with prices that estimate these and a great many other important future events, estimates that are at least as reliable as those from any other public source.
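For readers who don't think in betting odds, the arithmetic is simple (a sketch of the standard odds-to-probability conversion, reading the stated dollar figures as the larger, risked side of each bet):

```python
def event_probability_threshold(odds_against):
    """At a-to-1 odds laid against an event, laying that bet breaks even
    only if the event's probability is below 1/(a+1)."""
    return 1.0 / (odds_against + 1.0)

# Laying 12-1 against the labor share of U.S. income falling below 40 percent
# by 2025 implies assigning that outcome less than a 1-in-13 (~7.7%) chance:
print(event_probability_threshold(12))   # ~0.077
# The 20-1 bet on hardware's share of GDP implies less than 1-in-21 (~4.8%):
print(event_probability_threshold(20))   # ~0.048
# Equivalently, risking $1,200 to win $100 breaks even at 1200/1300, about
# 92.3% confidence in one's own side of the bet.
print(1_200 / (1_200 + 100))             # ~0.923
```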

I'd also like to see a time series of the rates at which jobs were displaced by automation in the past. If this rate were unusually high and rising, that would be an omen worth noticing. But if it's too hard to say which past jobs were lost to automation, what hope could we have of predicting which future jobs will be so lost?

Finally, trends in the rates of progress in robotic research are worthy of study. When I meet experienced artificial intelligence researchers informally, I often ask how much progress they have seen in their specific A.I. subfield in the last 20 years. A typical answer is about 5 to 10 percent of the progress required to achieve human-level A.I., though some say less than 1 percent and a few say that human abilities have already been exceeded. They also typically say they've seen no noticeable acceleration over this period.

If a more sustained study bears out those informal answers—and if that rate of progress persists—it would take two to four centuries for many A.I. subfields to (on average) reach human-level abilities. Since there would be variation across subfields, and since achieving a human-level A.I. probably requires human-level abilities in most subfields, a broadly capable human-level A.I. should take even longer than two to four centuries to emerge. Furthermore, computer hardware gains have been slowing lately, and we have good reason to think this will cause software gains to slow as well.
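For concreteness, the constant-rate extrapolation behind that two-to-four-century figure is just this (a sketch taking the informal survey answers at face value):

```python
# Take the typical informal answer at face value: 5-10% of the progress needed
# for human-level ability in a subfield, accumulated over 20 years, with no
# noticeable acceleration.
observed_years = 20
for fraction_done in (0.05, 0.10):
    total_years = observed_years / fraction_done      # constant-rate extrapolation
    remaining_years = total_years - observed_years
    print(f"{fraction_done:.0%} done -> ~{total_years:.0f} years total, "
          f"~{remaining_years:.0f} still to go")
# 5% done  -> ~400 years total, ~380 still to go
# 10% done -> ~200 years total, ~180 still to go
```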

Perhaps my small informal survey is misleading for some reason; bigger, more systematic surveys would be useful, as well as more thoughtful analyses of them. We do expect automation to take most jobs eventually, so we should work to better track the situation. But for now, Ford's reading of the omens seems to me little better than fortunetelling with entrails or tarot cards.