
Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle

The Tesla and SpaceX founder "summons the demon" of regulation.

Joe Duffy/Dreamstime

Artificial intelligence, or AI—the branch of computer science that aims to create intelligent machines—"is a fundamental risk to human civilization," declared Tesla and SpaceX founder Elon Musk at the National Governors Association's annual meeting this past weekend. "It's really the scariest problem to me." He finds it so scary, in fact, that he considers it "a rare case where we should be proactive in regulation instead of reactive. By the time we are reactive in AI regulation, it is too late."

The regulators' job, Musk said, would be to tell AI developers to "make sure this is safe and then you can go—otherwise, slow down."

This may sound reasonable. But Musk is, perhaps unknowingly, recommending that AI researchers be saddled with the precautionary principle. According to one definition, that's "the precept that an action should not be taken if the consequences are uncertain and potentially dangerous." Or as I have summarized it: "Never do anything for the first time."

As examples of remarkable AI progress, Musk cited AlphaGo's victories over the world's best Go players. He described how simulated figures, trained with DeepMind techniques and reward signals, learned in only a few hours to walk and navigate complex environments. All too soon, Musk asserted, "Robots will be able to do everything better than us." Maybe so, but for the foreseeable future, at least, there are reasons to doubt that.

Musk, who once likened the development of artificial intelligence to "summoning the demon," worries that AI might exponentially bootstrap its way to omniscience (and shortly thereafter omnipotence). Such a superintelligent AI, he fears, might then decide that human beings are basically vermin and eliminate us. That might be a long-run risk, but it does not require that we summon demon regulators now to slow down merely competent near-term versions of AI, especially when those near-term AIs can help us by driving our cars, diagnosing our ills, and serving as universal translators.

Despite Musk's worries, there is no paucity of folks already trying to address and ameliorate any existential risks that superintelligent AI might pose, including the OpenAI project co-founded by Musk. (Is Musk looking for government support for OpenAI?)

If developers are worried about what their AIs are thinking, researchers at MIT have just reported a technique that lets them peer inside machine minds, enabling them to figure out why machines make the decisions they do. Such technologies might allow future AI developers to monitor their machines to ensure that their values are congruent with human values.

Speaking of values, robotics researchers at the University of Hertfordshire are proposing to update Isaac Asimov's Three Laws of Robotics with a form of intrinsic motivation they describe as "empowerment." Empowerment formalizes and operationalizes aims that include a robot's self-preservation, the protection of its human partner, and support for maintaining or expanding that human's operational capabilities.

Humanity may avoid being annihilated by superintelligent AIs simply by becoming superintelligent AIs ourselves. The Google-based futurist Ray Kurzweil predicts that by the middle of this century we will have begun to merge with our machines. As a result, Kurzweil told an interviewer at South by Southwest, "We're going to get more neocortex, we're going to be funnier, we're going to be better at music. We're going to be sexier."

It is worth noting that Musk has founded a company, Neuralink, that could make Kurzweil's prediction come true. Neuralink is working to develop an injectable mesh-like "neural lace" that fits on your brain to connect you to the computational power and knowledge databases that reside in the Cloud. It would be a great shame if Musk's hypercautious regulators were to get in the way of the happy future that Musk's company aims to bring us.