Policy

Is It Time to Welcome Our New Robot Overlords?

From online shopping to drone surveillance, it's a brave new world out there.



If you're the least bit computer-literate, you've probably had the unnerving experience of searching for a product online—garden gnomes, for instance—only to find that once you've searched, seemingly half the advertisements you see on the web are pitching garden gnomes.

To those who grew up before the Internet, this looks like magic. To people growing up today, it looks like child's play. And in terms of artificial intelligence (AI), it is. For instance, retailers sometimes know a customer is pregnant before she has shared the happy news with friends.

A woman who starts buying prenatal vitamins or shopping for maternity clothes might find herself receiving mailers promoting cribs and baby clothes.

As The New York Times reported a couple of years ago, retailers such as Target hoover up vast libraries of information about customers—from basic demographic data to details about brand preferences and charitable-giving histories (some of which are bought from third parties). Predictive analytics then help aim marketing efforts at the customers they are most likely to persuade. Guy buys a riding lawnmower? Maybe he'd be interested in some topsoil and work gloves, too.
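Under the hood, that kind of suggestion can be as simple as counting which products tend to land in the same shopping basket. Here is a minimal sketch of the idea in Python, with invented purchase data; it illustrates basic co-occurrence counting, not any retailer's actual system.

```python
from collections import Counter

# Invented purchase histories: each set is one customer's basket.
baskets = [
    {"riding lawnmower", "topsoil", "work gloves"},
    {"riding lawnmower", "work gloves", "grass seed"},
    {"garden gnome", "topsoil"},
    {"riding lawnmower", "topsoil"},
]

def suggest(item, baskets, top_n=2):
    """Rank the items that most often appear in the same basket as `item`."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            co_counts.update(basket - {item})
    return co_counts.most_common(top_n)

# Items most often bought alongside a riding lawnmower (here: topsoil, work gloves).
print(suggest("riding lawnmower", baskets))
```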

Technology can predict other things as well—such as card sharks and terrorist threats. Vegas casinos pioneered the use of non-obvious relationship awareness (NORA), and it proved so effective that the Department of Homeland Security embraced it after 9/11. NORA allows a casino—or the federal government—to mine data and discover, say, that four people who traveled to Vegas on different flights and took separate rooms at the same hotel all share an apartment back in Chicago. Hmmmm.
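Strip away the branding, and NORA-style analysis is largely record linkage: pull records from different sources and group them on an attribute nobody expects to be compared, such as a shared home address. A toy sketch, with invented names, records, and field names:

```python
from collections import defaultdict

# Invented records from two separate sources (field names are hypothetical).
hotel_guests = [
    {"name": "A. Smith", "home_address": "12 W Oak St, Chicago"},
    {"name": "B. Jones", "home_address": "401 Pine Ave, Denver"},
    {"name": "C. Lee",   "home_address": "12 W Oak St, Chicago"},
]
flight_passengers = [
    {"name": "D. Patel", "home_address": "12 W Oak St, Chicago"},
    {"name": "E. Kim",   "home_address": "77 Bay Rd, Miami"},
]

# Group every record, regardless of source, by home address.
by_address = defaultdict(set)
for record in hotel_guests + flight_passengers:
    by_address[record["home_address"]].add(record["name"])

# Flag addresses shared by people who otherwise appear unconnected.
for address, names in by_address.items():
    if len(names) > 1:
        print(f"Non-obvious link: {sorted(names)} all list {address}")
```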

Since last year's marathon bombing, Boston has deployed a surveillance system of closed-circuit TV cameras throughout the city. The cameras feed data into a network that looks for anomalies. Wesley Cobb, the chief science officer for Behavioral Recognition Systems, says the network has "taught itself what to look for."
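Phrases like "taught itself what to look for" usually mean the system builds a statistical baseline of normal activity and flags whatever falls far outside it. A deliberately simple sketch of that idea, with made-up numbers and no claim to reflect the company's actual methods:

```python
import statistics

# Hypothetical hourly activity counts from one camera (e.g., people detected).
baseline = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]  # "normal" hours it has observed
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag an hour whose activity is far outside what the camera has learned."""
    return abs(count - mean) > threshold * stdev

print(is_anomalous(14))   # False: an ordinary hour
print(is_anomalous(55))   # True: worth a human's attention
```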

Surveillance can be unsettling, but it has little effect without human intervention. A security camera can notice a shoplifter, but it can't detain him. But what happens when human intervention becomes unnecessary, irrelevant or even problematic? That's the premise behind countless sci-fi flicks, from Terminator (1984) to Transcendence (now playing at a theater near you).

Scriptwriters aren't the only ones who noodle over such questions. Fear of runaway artificial intelligence motivated Ted Kaczynski, the Unabomber. Bill Joy, then the chief scientist at Sun Microsystems, used Kaczynski's manifesto as the starting point for his famous Wired article "Why the Future Doesn't Need Us."

In brief, Kaczynski warned of a future in which "computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them." In that event, humanity would either cede control of daily life to autonomous machines or cede it to the tiny technological elite that runs the machines. In the first case, we can't know what the machines would decide to do; in the second, the elite would effectively control everyone else.

If that does not sound quite grim enough, consider the view of AI theorist Steven Omohundro. In a recent paper for the Journal of Experimental & Theoretical Artificial Intelligence, he says we can indeed know what autonomous machines would do—and it isn't pretty.

Advanced systems are, he says, essentially obsessive. They do only one or two things and they "want" to do them as well as possible—and they are uninhibited by other considerations, such as ethics. (To take a simple example, Microsoft Word doesn't care one bit if you're writing a love letter or a death threat.) Sufficiently intelligent machines might decide their own priorities take precedence over ours.

"When roboticists are asked by nervous onlookers about safety," Omohundro argues, "a common answer is 'We can always unplug it!' But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess. This has very low utility and so expected utility maximization will cause the creation of the instrumental subgoal of preventing itself from being unplugged." Hello, Skynet.

Omohundro warns that unless carefully designed, intelligent machine systems will exhibit several antisocial traits: They will strive to protect themselves. They will seek ever-greater resources to achieve their aims. They will seek to maximize the scale on which they execute their core mission, including (potentially) self-replication. And they will evaluate everything in the world by whether it helps or hinders their ability to carry out that mission.

This does indeed sound disturbing. But it does not sound all that new. Plants and animals do the same things. More to the point, humans already have created intelligent systems that do all those things as well: private corporations, political movements, religions, and—perhaps most of all—governments.

Our new robot overlords, should they ever arrive, might not prove any more benevolent. But given humanity's grim history, they probably won't turn out any worse.