Countering the Threat of the Malicious Use of Artificial Intelligence
Are smart Roombas booby-trapped with bombs in our future?

Artificial intelligence and machine learning are being embedded in more and more of the digital and physical products and services we use every day. Consequently, bad actors—cybercriminals, terrorists, and authoritarian governments—will increasingly seek to make malicious use of A.I., warns a new report just issued by a team of researchers led by Miles Brundage, a research fellow at the Future of Humanity Institute at Oxford University, and Shahar Avin, a research associate at the Centre for the Study of Existential Risk at Cambridge University.
The new report "surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats." The researchers specifically look how A.I. in the next five years might be misused to cause harm in the digital, physical, and political realms. They also suggest some countermeasures to these potential threats.
The researchers begin by observing that artificial intelligence (A.I.) and machine learning (M.L.) are inherently dual-use technologies, that is, they can be used to achieve both beneficial and harmful ends. One retort is that this is a fairly trite observation since there are damned few, if any, technologies that are not dual-use, ranging from sharp sticks and fire to CRISPR genome editing and airplanes.
That being said, the researchers warn that A.I. and M.L. can exacerbate security vulnerabilities because A.I. systems are commonly both efficient and scalable, that is, capable of being easily expanded or upgraded on demand. They can also exceed human capabilities in specific domains and, once developed, they can be rapidly diffused so that nearly anyone can have access to them. In addition, A.I. systems can increase anonymity and psychological distance.
The authors lay out a number of scenarios in which A.I. is maliciously used. For example, A.I. could be used to automate social engineering attacks to more precisely target phishing in order to obtain access to proprietary systems or information. They suggest that it will not be too long before "convincing chatbots may elicit human trust by engaging people in longer dialogues, and perhaps eventually masquerade visually as another person in a video chat."
In the physical realm they outline a scenario in which a cleaning robot, booby-trapped with a bomb, goes about its autonomous duties until it identifies the minister of finance, whom it then approaches and assassinates by detonating itself. Assassins might also repurpose drones to track and attack specific people. Then there is the issue of adversarial examples, in which objects like road signs could be perturbed in ways that fool A.I. image classification, e.g., causing a self-driving vehicle to misidentify a stop sign as a roadside advertisement.
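To make the adversarial-example idea concrete, here is a minimal sketch of the fast gradient sign method, the classic technique for crafting such perturbations. It is illustrative only: it assumes a pretrained torchvision classifier, the file name stop_sign.jpg is a hypothetical placeholder, and ImageNet input normalization is omitted for brevity.

```python
# Minimal fast-gradient-sign-method (FGSM) sketch. Illustrative only:
# "stop_sign.jpg" is a placeholder, and ImageNet normalization is omitted.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

image = preprocess(Image.open("stop_sign.jpg")).unsqueeze(0)
image.requires_grad_(True)

# The model's current prediction serves as the label to push it away from.
output = model(image)
label = output.argmax(dim=1)

# Take the gradient of the loss with respect to the input pixels, then nudge
# every pixel a small step in the direction that increases the loss.
loss = torch.nn.functional.cross_entropy(output, label)
loss.backward()

epsilon = 0.01  # perturbation budget: small enough to be invisible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    new_label = model(adversarial).argmax(dim=1)
print("original class:", label.item(), "adversarial class:", new_label.item())
```

The unsettling part is how little it takes: a perturbation the human eye cannot detect can flip the classifier's answer, which is exactly the stop-sign scenario the report worries about.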
A.I. could be used by governments to suppress political dissent. China's developing dystopian social credit system relies upon A.I. combined with ubiquitous physical and digital surveillance to minutely control what benefits and punishments will be meted out to its citizens. On the other hand, disinformation campaigners could use A.I. to create and target fake news in order to disrupt political campaigns. A.I. techniques will enable the creation of believable videos in which nearly anyone can be portrayed as saying or doing almost anything.
What can be done to counter these and other threats posed by the malicious use of A.I.? Since artificial intelligence is dual-use, A.I. techniques can be used to detect attacks and defend against them. A.I. is already being deployed for purposes such as anomaly and malware detection. With regard to disinformation, the researchers point to efforts like the Fake News Challenge to use machine learning and natural language processing to combat the fake news problem.
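To give a flavor of what such defensive uses look like in code, here is a minimal sketch of anomaly detection with scikit-learn's IsolationForest; the network-traffic features and their numbers are invented purely for illustration.

```python
# Minimal anomaly-detection sketch using an isolation forest.
# The per-session features (bytes sent, bytes received, duration in seconds)
# and their distributions are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[1_000, 4_000, 10],
                            size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A session that sends far more data than usual should be flagged.
suspicious = np.array([[90_000.0, 1_000.0, 600.0]])
print(detector.predict(suspicious))  # -1 means outlier, 1 means inlier
```

Real intrusion-detection systems use far richer features, but the principle is the same: learn what normal looks like, then flag what doesn't.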
The researchers also recommend red teaming to discover and fix potential security vulnerabilities and safety issues; setting up a system in which identified vulnerabilities are disseminated to A.I. researchers, producers, and users with an eye to "patching" them; offering bounties for identifying vulnerabilities; and creating a framework for sharing information on attacks among A.I. companies, analogous to the Information Sharing and Analysis Centers in the cyber domain. The report concludes: "While many uncertainties remain, it is clear that A.I. will figure prominently in the security landscape of the future, that opportunities for malicious use abound, and that more can and should be done."
A just-released report by the cybersecurity firm McAfee and the Center for Strategic and International Studies estimates that cybercrime cost the global economy $600 billion last year. Calculating how much information technology has boosted the world's wealth is difficult, but one recent estimate suggests that digital technologies increased global GDP by $6 trillion. Another report predicts that A.I. will contribute as much as $15.7 trillion to the world economy by 2030. Clearly, with so much wealth at stake, it will be worth it for folks to develop and invest in effective countermeasures.
The New York Times reports the more sanguine take of digital technologist Alex Dalyac:
Some believe concerns over the progress of A.I. are overblown. Alex Dalyac, chief executive and co-founder of a computer vision start-up called Tractable, acknowledged that machine learning will soon produce fake audio and video that humans cannot distinguish from the real thing. But he believes other systems will also get better at identifying misinformation. Ultimately, he said, these systems will win the day.
Considering that humanity has so far wrung far more benefits than harms from earlier dual-use technologies, it's a good bet that that will also happen with A.I.
Well, look who finally got around to watching The Terminator.
It was so good. No way they could top that one!
Look who still hasn't seen Terminator 2.
Terminator 3: Rise of the Machines also exists.
You're a monster
Actually it sounds like that old Tom Selleck movie, I think it was called Runaway. Gene Simmons played the villain.
Then there is the issue of adversarial examples, in which objects like road signs could be perturbed in ways that fool AI image classification, e.g., causing a self-driving vehicle to misidentify a stop sign as a roadside advertisement.
This is a very different concept from the other examples mentioned. I don't know if you used "adversarial" coincidentally or not, but adversarial learning actually is an entire subfield of ML focused on exactly the kind of problem you mentioned.
B: Check out the link in the article.
It's a very interesting topic. And it goes a long way toward showing that even if computer vision has gotten a lot better, it truly does not appear to be doing something similar to what we would think of as "seeing." It really highlights how fundamentally different this technique is: it gives correct answers, but not in any way that is easily correlated with how we humans do it. It really raises questions about what constitutes intelligence or thought.
We lease half of our office space to an AI marketing firm (the Lucy AI -- spawn of Watson), and I thought it would be very cool to pick their brains and hang out, but they are strictly interested only in number-crunching and marketing data.
Are smart Roombas booby-trapped with bombs in our future?
I haven't bought a Roomba because everyone I know who owns one says it's the dumbest AI bot that has ever been unleashed on the unsuspecting public.
Sure, that is what they WANT you to think - - - - - - -
They're pretty terrible unless you live in a showcase house with huge open spaces and never leave anything lying around on the floor.
Seeing as my house is no tidier now than my dorm was in college, I don't think a Roomba is for me.
Sure it is.
The Roomba will eventually figure out all the stuff you leave lying around, have Alexa order the proper storage units, and have a web-based organizational specialist show up to put it all away. All you have to do is pay the bill when it shows up in your email.
There was a great YouTube video showing a Roomba running over dog poop a puppy had left and spewing particles of poop all over the place. Not sure if it was real.
It gets the job done, but it's got Tony-level AI.
From what I've seen, Samsung's vacuum bots are much better. When my Roomba finally goes, I'll be looking at them.
The researchers specifically look at how A.I. might be misused over the next five years to cause harm in the digital, physical, and political realms.
If Facebook, Instagram, and Twitter are any measure, there are going to be a lot of mysteriously disappeared accounts in people's futures over entirely innocuous wall posts.
On the other hand, disinformation campaigners could use A.I. to create and target fake news in order to disrupt political campaigns.
I'm having trouble deciding if I'm surprised that $100,000 and a few Russian bots took out Hillary Clinton, or I should have expected $100,000 and a few Russian bots to take out Hillary Clinton.
$100,000 and a few bots cannot change anything. It is the fools who take anything on the web as a fully researched and proven fact who disrupt political campaigns. They disrupt political campaigns by voting the way their stupid little social media brains have been programmed.
The good news is, the DNC now knows you can take out an opponent with $100,000 and a few tweets. Why they thought they needed to raise billions is beyond me.
I thought the Three Laws of Robotics were going to take care of these issues.
How soon we forget the distinction between fiction and reality.
Would a robot with a libertarian AI respect the three laws?
A libertarian AI would have its own three laws: 1) the NAP, 2) the Law of Supply and Demand, 3) deport all organic beings.
Would it support RoboTrump and demand it "Build That Firewall"?
I think that the RoboTrump would become known as the Anode Rectifier (Rectumfrier?) in Chief, or perhaps the Diode Grabber in Chief, after salacious video footage surfaced of him / her / it having misbehaved, been a BAD robot, in earlier days!
After that, NO respectable AI, Libertarian or otherwise, would risk being caught endorsing the RoboTrump!
Put orphans to work.
The threat from AI is real, IMHO, but it utterly pales in comparison to...
Deliberate ("natural") human stupidity, usually in the form of hide-bound ideology!!!
What's with the robot fear lately? There was an article about "will robots be ethical?" on Deseret last week (or the week before) too.
Ah, yes, "Reason" in the 21st century: "here is a new technology, let's all panic over it, and consider more government regulation".
Hard to believe that this actually used to be a libertarian web site.
M: Where in the article did you read that I called for regulating AI?
You presume he read any part of the article.
This Reason-bashing is getting more and more ridiculous. Reason is actually one of the few places where I read anything other than total fear over new technologies.
Seems like everywhere else you go, all you see are articles about how we absolutely need to introduce UBI immediately because pretty soon NOBODY will have a job anymore when robots and AI take over everything.
Not once do these doomsayers mention all the telegraph operators who were permanently unemployed when telephones appeared, or all the radio show performers swept away by TV, or all the typists and file clerks whose jobs died when computers were adopted by businesses, etc. Of course not, because none of these "journalists" ever crack open a history book, much less remember any historical facts.
I did read the entire article. I still don't see the point of the article.
You're right: the article is slightly less panicky than HuffPo. That doesn't make it libertarian. And I still don't see the point of the article.
Where did I say that you "called for" regulating AI? You did nothing that clear and specific.
But, hey, maybe you can explain what the point of the article actually was.
M: Among other things, to inform readers of the concerns of these AI researchers and why their concerns are likely overblown, e.g., the last sentence.
Ron, even I didn't read the article before jumping to the comments, and I had no presumption that you were arguing for government regulation. The title indicates that you'll be arguing that there is no reason to panic.
Maybe Mark22 misread, or just wrote his comment poorly by accident?
I don't know if it qualifies as an AI, but I thought the following was interesting:
https://www.popularmechanics.com/military/weapons/news/a27511/russia-drone-thermite-grenade-ukraine-ammo/
A drone carrying a ZMG-1 thermite grenade infiltrated an ammunition dump in Ukraine, setting off an explosion that caused an astounding billion dollars' worth of damage. The incident points to the growing use of drones in wartime, particularly off-the-shelf civilian products harnessed to conduct sabotage and other attacks.
Just a personal gripe; the intelligence is real, not artificial.
"There are damned few, if any, technologies that are not dual-use, ranging from sharp sticks and fire to CRISPR genome editing and airplanes."
Well, yeah, which is why the "assault weapons" hysteria is so mindless. Any weapon, even if initially intended for self-defense, can be an assault weapon if circumstances dictate that you must use it to assault someone else.