As artificial intelligence systems—including bots that exist as nothing more than lines of code—become increasingly pervasive and autonomous, it's natural to expect that their potential for unexpected and unwanted behavior will grow as well. In short, writes Greg Beato, some robots are going to commit crimes. What will we do as a society when that happens?
Charging robots and other A.I. systems with crimes may seem absurd, Beato notes. And locking up, say, an incorrigibly destructive Roomba in solitary confinement sounds even more preposterous. But how, exactly, do we punish entities whose consciousness arises from computer code?