Greg Beato on What to Do When Robots Break the Law
As artificial intelligence systems—including bots that exist as nothing more than lines of code—become increasingly pervasive and autonomous, it's only natural to assume that their potential for unexpected and unwanted behavior is going to increase too. In short, writes Greg Beato, some robots are going to commit crimes. What are we going to do as a society when that happens?
Charging robots and other A.I. systems with crimes may seem absurd, Beato notes. And locking up, say, an incorrigibly destructive Roomba in solitary confinement sounds even more preposterous. But if we don't punish such offenders, how exactly do we hold accountable entities whose behavior arises from computer code?