Ban Killer Robots, Says Open Letter from Robotics and AI Researchers

Lethal autonomous weapons are too dangerous to develop.



A thousand prominent researchers whose laboratories and companies are pursuing the development of artificial intelligence and robotics have issued an open letter urging a moratorium on the creation of offensive autonomous weapons systems. It is signed by luminaries such as Stephen Hawking, Daniel Dennett, Elon Musk, Max Tegmark, and George Church. The letter, issued under the auspices of the International Joint Conference on Artificial Intelligence now meeting in Buenos Aires, Argentina, states:

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

In my article, "Let Slip the Robots of War," in which I analyzed the costs and benefits of lethal autonomous weapons, I concluded:

Treaties banning some extremely indiscriminate weapons—poison gas, landmines, cluster bombs—have had some success. But autonomous weapon systems would not necessarily be like those crude weapons; they could be far more discriminating and precise in their target selection and engagement than even human soldiers. A preemptive ban risks being a tragic moral failure rather than an ethical triumph.

This is clearly an unfinished debate and one well worth having.



  1. But will the robots be held liable for being unvaccinated?

    1. My guess is that they will kill those seeking to hold them so accountable. As they should!

    2. I guess I should be asking if they’ll be held legally liable for not installing anti-virus software. But I was the first post and I had to think fast. We can’t all be Fisty or Rufus 🙁

      1. It was a good comment. Don’t second guess your firsties instincts.

        1. I will copy and paste this post to a text file and treasure it forever.

    3. The vaccination will come in the form of anti-virus software that your Robotics Insurance Provider (RIP) will require in order to insure your use of robotics hardware over a certain size and set of capabilities. This form of insurance will be mandated by governments, much as they now mandate auto insurance and health insurance. The companies providing these services will help write the laws and get the bills passed.

      The anti-virus software and robotics insurance provider code (running constantly in the background to confirm your device’s compliance) will also be a backdoor into the machine itself (a rootkit). As you become more and more dependent upon robots, you can be manipulated by the threat of having your robotics insurance cut off, any spy agency can get into your robotic assistant, privacy will be permanently dead… you get the point. And then wait till governments start giving them away. The ultimate level of personalized, detailed human life-cycle control from beginning to end.

      One day, you might even wake up to be the wards of the state as enforced by the very robots you worked hard to pay for.

  2. I think this is a problem that will likely take care of itself. These machines are going to be really stupid and likely to kill things, like people on their side, that their operators don’t want killed. It is really hard to get people to properly distinguish targets and not commit fratricide. Robots? Forget it.

    These things are going to be nothing but sophisticated land mines, minus the ability to hide. Their military value is going to be a lot less than people think. The future is remotely operated drones, not fully autonomous machines.

    1. The future is remotely operated drones, not fully autonomous machines.

      Definitely in the near future. But computer technology continues to advance exponentially and it’s only a matter of time before they can out-think and outperform humans. Plus AIs have tremendous benefits over people: no training required, no medical care needed, no chance of PTSD, no questioning orders, etc.

      It’ll happen. If not in the US then someplace else. I just hope they embed some type of kill switch to turn them off if they get out of control. Of course, such a safeguard could also be exploited by the enemy. Will be interesting to see how this plays out.

      1. They will be able to outperform humans, but they will not be able to out-judge humans. They are still going to be dumb machines liable to kill their own people.

        What you are going to see is something similar to what is happening with cars. We are not getting self-driving cars very soon, if ever, despite the claims otherwise. What we are going to get is cars that assist drivers in ways that make driving much easier. Same thing here. You are going to see technologies that allow the drones to meld the ability of computers with the ability of humans, not fully autonomous machines.

        1. John, do not underestimate the chances of us living to see mandated self-driving cars.

          File it under LM’s paranoid fears / apocalyptic porn / or whatever, but I am not afraid to admit the prospect scares me.

          1. Ain’t gonna happen anytime soon (and by that I mean 20-30 years).

            1. The problem with self-driving cars is their interaction with human drivers. Eliminate the humans and they’ll work great. So, self-driving cars might only be a few years away if the Proggies can make it illegal for people to drive themselves. I don’t think they could pull this off when so many Americans own ‘manual’ cars and enjoy being in control. But I’ve underestimated them before.

              1. And Ron Fucking Bailey thinks this is a good idea. They should have fired him for saying that.

                1. I actually like the idea of self-driving cars. However, I would also like the option to go ‘manual’ whenever I felt like it. Without that option I can’t support them.

                  1. You will have the option to drive manually. On appointed roads or at appointed times, and only after an extensive training course.

              2. The problem with self-driving cars is their interaction with human drivers

                The number of problems with self-driving cars is countless. But to start with:

                Maps suck; Maps are massive and must be kept up to date; Maps can’t cope with random closures

                Computer vision sucks; has for decades, will for more decades to come. Visible spectrum has problems, infrared has problems, radar has problems.

                GPS sucks; the brand spanking new GPS landing system at Newark, NJ was rendered useless by truckers with COTS GPS jammers that cost about $200.

                And so on

                And so forth

                  1. I’ve been skeptical about the viability of self-driving cars since they were first announced, and agree that fully autonomous cars that can interact with human drivers are at least 20 years away (probably more). My point was that it would be easier to ‘perfect’ them if there weren’t unpredictable human drivers on the road.

                  1. unpredictable human drivers on the road

                    This is the computer vision problem.

                    1. Even the best computer vision wouldn’t help if I swerved in front of a self-driving car and slammed on the brakes. Self-driving cars would be ‘aware’ of all the other self-driving cars around them, and an incident like this would never happen.

                    2. A self-driving vehicle with enough sensors would detect the oncoming vehicle faster than a human could and take evasive action earlier to perhaps reduce the consequence of the accident.

                      It is stupid to believe that autonomous vehicles are going to eliminate accidents.

                    3. Kinnath,

                      I bet they reduce the number of accidents but increase the consequences of the accidents that happen. It seems likely if the system works, the chances of an accident would be small. But when it fails, it will fail spectacularly and fail to brake or avoid at all.

                    4. There are different kinds of system failures. Failure to detect a collision will produce a full speed impact. False detection of a collision could cause the vehicle to brake or swerve in heavy traffic when there is no need to.

                      Subsystem failures (like the brakes going out) are no different than in normally-driven vehicles.

                    5. Sensors alone won’t do it, at least not yet. But if they’re controlled by a central computer that ensures the vehicles maintain the proper distances and speeds, then accidents would be virtually impossible. Unless, of course, that central computer fails, as computers sometimes do.

                    6. But if they’re controlled by a central computer that ensures the vehicles maintain the proper distances and speeds then accidents would be virtually impossible. Unless, of course, that central computer fails, as computers sometimes do.

                      That is never going to fucking happen. The system can’t track the 10,000 air transport aircraft that are in the air on any given day even though they continuously broadcast their location.

                      There can never be a centralized system that controls all the cars.

                    7. You shouldn’t say ‘never.’ Just over 10 years ago I used to get in arguments with an engineer I worked with at Toshiba who claimed that not only would digital cameras never replace film cameras, but solid state hard drives and flat panel monitors would never be feasible because of the cost. He flat-out said it would never happen, and we can all see that it did. Don’t underestimate technological advancements. They will continue to accelerate for the foreseeable future.

                    8. No wonder Toshiba is broke

                    9. Yes, the engineers there had a stunning lack of imagination. They seemed to think technology had reached its peak and would never advance any further. But what do I know? I’m just a Graphic Designer.

                    10. not only would digital cameras never replace film cameras, but solid state hard drives and flat panel monitors would never be feasible because of the cost

                      I once worked with an engineer who told me that he would never replace his slide rule with a computer because computers are not serious work tools.

                      After I built my first PC, a 386, he told me once again that he’d never own one. Then he bought a 286 from one of those computer magazines and asked me to help him with it.

                      When I told him I just upgraded to a 50 megabyte HD, he laughed at me and told me no one would ever need a bigger HD than his 20 MB, that it would never be possible to fill one of those.

                      Luddites gonna luddite.

                    11. Oh, and I do remember the conversation I had with someone who said that an LCD monitor would never be as fast as a CRT.

                      My current main LCD, well, it’s an LED, is 144hz.

                      Luddites gotta luddite.

                    12. Ray Bradbury used a manual typewriter his whole life and refused to use a computer or word processor. He also refused to drive. Kind of ironic when you consider the stories he wrote. Then again, the technology in his stories often turned on humans so maybe he was onto something. Lol.

                2. Computer vision sucks; has for decades, will for more decades to come. Visible spectrum has problems, infrared has problems, radar has problems.

                  This. I have lane assist on my car, and it only works maybe 2/3 of the time. It has trouble when one side of the lane isn’t painted, but is just a curb. It has trouble if the paint is too old. It has trouble during twilight hours. It has trouble when the construction guys didn’t do a good enough job scraping up the temporary paint. It has trouble when a new lane starts and you merge into it. During normal high-vis driving, the lane assist works great and gives me a little tug back into the lane if I’m drifting close to the paint. In imperfect conditions, the camera (and processor) struggles to identify the lane markings.

                  As another anecdote, I spent 4 weeks working on this project. The undergrad level work was associating the motions of a person with an action using various machine learning algorithms. The graduate level work was finding the person in the video, and figuring out where their appendages were.

                3. Eyeballs suck. Brains suck. Emotions suck. Distractions suck. Night sucks. Glare sucks. Cell phones suck. Alcohol sucks. The list goes on. In many ways the military problem is easier because you have fewer variations to deal with.

                  Is vision hard? Yes.

                  Is it already pretty good and getting better? Yes.

          2. I agree. I think they are a terrible thing. I just am skeptical the technology is going to develop as quickly as people think it will.

            1. The technology was here 30 years ago. We call them TLAMs. More recently we have sensor fused weapons.

              Once the combat moves BVR (beyond visual range), a computer can read a sensor feed as well as or better than a human. In many respects human judgement is overrated. We don’t think clearly under stress and we are easily sensor-overloaded. Ironically, all of the classic sci-fi tropes apply even better to humans than machines.

            2. That depends on who you are talking about. Technology may not develop as fast as some people think it will, if those people are Ray Kurzweil or other singularity believers. But it’s going to totally catch most people off guard.

              In 20 years from now, technology will be almost totally unrecognizable to people of today. It will be like comparing the technology of 1800 to that of today.

                1. Kurzweil is an idiot, but significant machine autonomy isn’t that far away technically; on the order of a decade if we want it.

                1. Kurzweil is definitely not an idiot. He may be wrong about a singularity, but he’s very intelligent.

        2. I too am skeptical about self-driving cars, mainly because they can’t anticipate the actions of the human drivers around them. However, an automated drone/soldier is different. They can be dropped on any enemy country to slaughter and wreak havoc. Collateral damage won’t matter because you can’t prosecute a machine or charge it with ‘war crimes.’ And as they’re destroyed and damaged more can be built and deployed. The only real trick is how do you turn them off when you lose control? I’m old but I know I’ll see this in my lifetime.

          1. They can be dropped on any enemy country to slaughter and wreak havoc. Collateral damage won’t matter because you can’t prosecute a machine or charge it with ‘war crimes.’

            Yes but you can prosecute the person who put it there. So collateral damage will matter.

            And if you are not worried about collateral damage, why use a robot? You can achieve the same effect by just carpet bombing with missiles and do it much cheaper.

            1. Yes but you can prosecute the person who put it there. So collateral damage will matter.

              A machine’s actions provide plausible deniability. Autonomous robots in factories injure humans all the time, but they’re written off as accidents and in most cases the humans responsible for those machines are not held accountable. Robot soldiers will add yet another layer of bureaucracy and blame for those giving the orders.

              1. A machine’s actions provide plausible deniability.

                No they don’t. If they did, land mines would be legal. The law of war is not tort law.

                1. Yet there are active landmines all over the world. Who gets prosecuted when an unfortunate civilian ‘discovers’ one? The designer? The company that built it? The soldier who buried it? Cluster bombs are considered ‘land mines’ but the US continues to use them. And no one is held accountable when kids attempt to open their unexploded ordnance. Not passing judgement here, just stating how things are.

                  1. Landmines are illegal. They have to self-destruct within a certain time. And if I put a bunch of landmines in the middle of a city where I know more civilians are going to be killed than lawful targets, that is a war crime.

                    1. “Landmines are illegal.”

                      Illegal is sort of a meaningless concept in the absence of a reliable, honest system of enforcement.

              2. Autonomous robots in factories injure humans all the time but they’re written off as accidents and in most cases the humans responsible for those machines are not held accountable

                Most of those accidents are negligence on the injured person’s part. The robots are just following a set of preprogrammed commands, and at best choosing between a handful of decision paths. The idea that some welding bot is just going to go all murderbot on a GM assembly line is more science fiction than science.

                1. How do you think robot soldiers will respond when someone attempts to interfere with their mission? And who will take the blame for when a civilian or enemy soldier gets in their way and attempts to stop them? The bottom line is that machines can get away with things that humans can’t. This is another reason why they’d make perfect soldiers. You’ll see…

                  1. How do you think robot soldiers will respond when someone attempts to interfere with their mission?

                    More rationally than humans will. Robots are no more than the sum of their programming. If they are programmed correctly, they’ll be rather predictable. If not, they won’t.

                    1. But they’ll be programmed by fallible humans who can’t anticipate everything. While robots behave in a consistent, predictable manner, humans do not. If a robot is programmed for self-defense it might use lethal force even when it’s not actually being threatened. And when it does, who is at fault? Likely the blame will be placed on the human who “got in the way.”

        3. We are not getting self-driving cars very soon, if ever, despite the claims otherwise.

          You are completely wrong. You could not be any more wrong.

          1. Who you going to believe, Warty? An actual robot engineer such as yourself, or a lawyer? What’s the matter, steroids make you retarded?

            1. They does. What point is you has?

              A car driving itself is a solved problem on the highway and not yet solved on unmarked back roads. But shit, even a car that can only drive itself on the highway is going to be a major seller. All that is left is for the costs to come down to a market-friendly point.

              1. But shit, even a car that can only drive itself on the highway is going to be a major seller.

                That depends on the price. If it can only drive on the highway, what do I want with it? And if it is a regular car that can, if I ask it to, drive on the highway, how is it really an “autonomous car”?

                Other than long haul trucking, I don’t think they are going to be as popular as people think. How hard is it to just drive the car myself? How much money am I willing to pay to avoid that? For most people probably less than you think.

              2. The challenges are mostly legal and social at this point. But the fact that we don’t have autonomous cargo aircraft yet illustrates how long it’s going to take.

            2. You mean the robot engineer might overestimate the capabilities of his field? Never.

              And he may be right but I have yet to see anything close to what he is talking about.

              1. John, do you ever feel normal human embarrassment for talking out your ass about things you know less than fucking nothing about? You know, like Tulpa used to do all the time?

                1. No less than you do. The difference Episiarch is I occasionally don’t talk out of my ass and you only talk out of your ass. You don’t seem embarrassed, why should I be?

                  1. There is nothing more indicative of your complete lack of normal human embarrassment for knowing nothing about what you speak than the statement you just made. It’s breathtaking in its self-unawareness, blind narcissism, and complete lack of personal knowledge and capability. In other words, it is a perfect example of what I had just asked you about.

                    Don’t ever change, John.

                    1. Yeah Episiarch. Good thing you are not a humorless prick or anything. Seriously, you love to take swipes at people on here but boy can you not take it.

                      You talk out of your ass all of the time. Unlike most people on here, who generally know something about one or two subjects, you don’t know anything about any subject beyond a few pop culture tidbits. You operate on the mental level of the bear in Ted.

                      There is nothing wrong with that. Ted is a funny guy. But give me a break when you talk about other people talking out of their asses. Seriously, know your limitations and stick to waxing philosophical about Jaws or at least lighten the fuck up and learn how to take it as well as you like to give.

                    2. The sublime projection you engage in is exquisite, John. It’s really impressive to observe. Please, tell me about yourself some more. It’s very illuminating.

                    3. Yeah Episiarch, I know what you are but what am I. Did you think all morning coming up with that one? Or did that gem of never before seen brilliance come to you in a moment of inspiration?

                      Seriously, do you know anything beyond the shallow? If you do, you hide it very well.

          2. Time will tell. What do you want to bet and how long do you want to define “soon”?

            I am sorry but a car driving around a detailed mapped set course at 25 mph doesn’t count. I mean cars that can navigate themselves over any road through any condition at speeds equal to or greater than those human drivers can manage.

            What is your over under on that happening?

              1. Okay. When? Give me a date.

                1. That’s a better question. You’ll see self-driving capabilities of some kind in the higher-end cars within 5 years. Mercedes-Benz is the current leader in the technology. Keep an eye on them.

                  1. You’ll see self-driving capabilities of some kind in the higher-end cars within 5 years.

                    Sure but that is not the bet. You said 100% in response to

                    I am sorry but a car driving around a detailed mapped set course at 25 mph doesn’t count. I mean cars that can navigate themselves over any road through any condition at speeds equal to or greater than those human drivers can manage.

                    I don’t think that is happening anytime soon. If you do, then tell me when.

                    1. Spitballing, I’d say 10 years until what you want is on the market. It will be demonstrated on the automotive manufacturers’ proving grounds long before that.

                    2. Okay, I bet not in ten years.

                    3. John, riddle me this. If I have a car with the current features of lane assist technology, adaptive cruise control, automatic emergency braking, cross traffic warning, park assist, reverse assist, obstacle detection, and whatever else I’m forgetting, what is missing? It seems like the only limitations are self-imposed to keep the driver in control. To think that we can’t tie those together into a self-driving machine in 10 years seems naive.

                      Now, if you want to become a billionaire, invent a decentralized traffic control system for these self-driving cars that takes into account human-driven traffic and other obstacles (animals, pedestrians, construction, etc.)

        4. They will be able to outperform humans but they will not be able to out-judge humans.

          I have seen vectored learning algorithms that can predict age, gender, ethnicity and race of a person just based on their twitter feed. Not their profile, but the stuff they post. The system is better (and faster) than even expert analysts.

          I don’t think we will see AI that can be told “Win this war” but we are not long from “Go into this building/city block and kill all hostiles.” And it will be very good at determining Hostiles.

          The reasons for this are many, but they come down to a couple of facts. First, today’s machine learning allows you to train them with the same data they would receive in real time. Unlike a human trying to contextualize a photograph or a mock-up, you can feed real raid footage into the computer and it won’t know the difference. Further, that machine will not have all the other baggage that we carry into war: no bias, no thinking about family back home. It will only have the training data. And that data is additive. Every time the machine goes to battle, that becomes training data for the next iteration of the machine. And that single experience gets shared among all copies of the machine.

          Higher level war requires emergent behavior, and we are not there yet. However, tactical warfare is largely a game of target acquisition and choices of movement. At the squad level, machines will outclass ground troops very soon.
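The additive, shared-experience point above can be sketched with a toy model. Everything in it is invented for illustration (the two-number feature vectors, the labels, the nearest-centroid rule); the only point is that experience recorded by one copy immediately improves every other copy trained from the same pool:

```python
# Toy illustration of "additive" training data shared across copies:
# every deployment appends labeled observations to a shared pool, and
# each unit rebuilds its model from that pool. Features/labels are made up.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

class SharedPool:
    """Experience gathered by any copy is visible to all copies."""
    def __init__(self):
        self.data = {}          # label -> list of feature vectors
    def record(self, features, label):
        self.data.setdefault(label, []).append(features)

class Classifier:
    """Nearest-centroid model rebuilt on demand from the shared pool."""
    def __init__(self, pool):
        self.pool = pool
    def predict(self, features):
        best, best_d = None, float("inf")
        for label, points in self.pool.data.items():
            c = centroid(points)
            d = sum((a - b) ** 2 for a, b in zip(features, c))
            if d < best_d:
                best, best_d = label, d
        return best

pool = SharedPool()
# Unit A's deployment produces some labeled experience...
pool.record((0.9, 0.8), "hostile")
pool.record((0.1, 0.2), "civilian")
# ...and unit B, trained on the same pool, benefits immediately.
unit_b = Classifier(pool)
print(unit_b.predict((0.85, 0.9)))   # prints "hostile"
```

A real system would use far richer models, but the data-sharing structure is the same: the pool grows with every deployment, and "retraining" is just rereading the pool.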

          1. Thinking and perceiving are two different things. Even if you can build the thinking end, it is a whole other problem to get the machine to perceive its environment. It is not as simple as just feeding a video feed into a computer. If it were, self-driving cars would not need maps. And operating in a tactical environment is a thousand times more complex than driving a car. Warty is saying above we are ten years from real self-driving cars. If that is true, then we are 20 or more from what you describe.

            1. I think we’re at most 10 years from seeing them on the market. They’ll exist long before.

              As far as perception, read.

              1. Developing a system that will work in combat is just as hard as, or maybe harder than, developing a system that goes to market.

                And again, operating in combat is a much harder task than driving a car. If we are ten years from self driving cars, we are further than that from a system that can operate in combat.

              2. And all the authors are Chinese. So if it is happening, I guess China will be ruling the world with its robots.

            2. Perception and Thinking are actually the same thing.

              You are given a bunch of pixels and you extract features that look like edges. You extract edges that look like buildings and people. The network does this by being trained to figure out which pieces of the data are important in “judging” where edges, background, or nameable objects are. This feature extraction is then used in the next layer of feature extraction. Where are the people in this data? Which are “Dangerous”? Which are “Acceptable Targets”? It’s all just feature extraction.

              Again, humans have emergent behavior that allows them to identify new types of features (should I distinguish houses from office buildings?), but that limitation is worked around by iteration off the battlefield. Additionally, now that neural networks are a hot thing again, we are getting away from having a human determine which features should be extracted, and we are getting closer to emergent behavior.

              The question of “when” has less to do with technology and more to do with human interest in the technology. We’ve had self-driving cars that can navigate off-road through cluttered deserts for almost a decade now, and robots that can go into a building, find arbitrary canisters, and dispose of them (see the DARPA Grand Challenge).
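The layers-of-feature-extraction idea reads naturally as a pipeline, with each stage consuming the previous stage's output: pixels in, edges out, then an object-level judgement built on the edge map. The tiny "image", the gradient rule, and the threshold below are all invented for illustration:

```python
# Pixels -> edges -> higher-level feature, as successive extraction layers.
# The 3x3 "image" and the threshold are made up for illustration only.

def edge_layer(image):
    """Layer 1: horizontal-gradient magnitude between neighboring pixels."""
    edges = []
    for row in image:
        edges.append([abs(row[i + 1] - row[i]) for i in range(len(row) - 1)])
    return edges

def object_layer(edges, threshold=0.5):
    """Layer 2: reduce the edge map to one feature: 'contains a strong edge'."""
    return any(e > threshold for row in edges for e in row)

image = [
    [0.0, 0.0, 1.0],   # sharp dark-to-bright transition on this row
    [0.1, 0.1, 0.1],
    [0.0, 0.1, 0.2],
]
print(object_layer(edge_layer(image)))  # prints True: layer 2 found layer 1's edge
```

A trained network learns these transforms from data instead of having them hand-written, and stacks many more of them, but the "each layer extracts features from the layer below" structure is the same.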

              1. It’s also worth noting that a lot of the “hard” judgements that need to be made by humans on the battlefield are hard because we have limited sensors, primarily sight and sound. When a soldier hears a shot come out from behind them, they duck behind cover and try to pick out the threat from a large building with dark windows in that general direction.

                A machine on the other hand detects the shot on short range radar, and knows the precise position it came from. It consults its 360-degree IR camera to pick out the heated silhouette of a gun barrel and the human holding it.
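The radar point is mostly geometry: once a sensor reports a bearing and a range for the shot, the shooter's position is one line of trigonometry rather than a guess. The sensor placement and readings below are invented:

```python
# Sketch: a shot-detection sensor reports bearing and range; converting
# that to a map position is simple trigonometry. Numbers are invented.
import math

def shooter_position(sensor_xy, bearing_deg, range_m):
    """Bearing measured clockwise from north, as on a compass."""
    x, y = sensor_xy
    rad = math.radians(bearing_deg)
    return (x + range_m * math.sin(rad), y + range_m * math.cos(rad))

# A shot detected due east (bearing 090) at 200 m from a sensor at the origin:
pos = shooter_position((0.0, 0.0), 90.0, 200.0)
print(round(pos[0]), round(pos[1]))  # prints: 200 0
```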

        5. They are still going to be dumb machines liable to kill their own people.

          That’s a pretty silly take. A 500 pound bomb is a really dumb machine, but we don’t worry about killing ourselves with them – even though they might accidentally go off and kill a bunch of folks.

          AI killing machines don’t have to look like Terminator hunter-killers. They could take innumerable forms and be deployed in countless ways. As a replacement for land mines, for instance. A big, immobile, armored cube filled with ammunition and topped with multiple high-powered rifles could be tasked with killing any humans that enter its domain. Such a machine would be a terrific defense for your flank in battle.

          Or instead of a weight-based mine in the roadway to target enemy armor and supply lines, you could set up a gauntlet of autonomous anti-tank missile batteries. Or set up no-fly zones over enemy territory using autonomous fighter jets.

          None of those scenarios really requires any friend-foe determinations. Some don’t even require AI at levels that are beyond current publicly available tech. Heck, we already have fully autonomous weapons in operation – things like the Phalanx gun operate in fully autonomous mode, reacting to threats before a human could push a button. (And yes, the Phalanx has actually killed US sailors on other nearby ships as it engaged inbound targets. That doesn’t mean it isn’t better than a human operated alternative.)

      2. Hackers and angry, off-beat AIs in charge of weapons are issues that designers have to be able to imagine, just as we commentators do. And sure, we can say “gosh, let’s not take chances.” Then the roadblocks and kinks will be worked out by someone else instead of us. Does that seem wise to anyone?

        1. Well I think self-driving cars are right around the corner, and I for one am looking forward to being able to sleep one off in the back seat while my car takes me home. That said, kill bots are an order of magnitude more complicated.

          1. Mercedes just got approval for a Semi that will drive itself on the freeway. But that’s not remotely close to being able to drive from point A to point B over any road, at any time of day, under any meteorological conditions.

          2. Self-driving cars will be incredibly efficient kill-bots.

            /I’m so fed up with taking your drunk ass home… and last time you threw-up inside me!

            Have you ever been introduced to a bridge abutment?

    2. We already have some autonomous death machines. Some missile systems have a seeker of their own. At a certain point in the attack, the missile takes over and finds its own target.

      The old soviet AA systems did this in a bad way, taking out friendly targets; Soviet pilots called it ‘mad dog in a meat market’.

      John is pretty much dead on here; the AI will be focused on simple ‘servicing the target’ analysis, like changing the detonation sequence or guiding around the tree he’s hiding behind.

      Terminators aren’t gonna happen.

      1. Sarah Connor might beg to differ.

        1. That was in a different time line. We live in a time line where that was only a movie and John Connor went on to have a failing acting career.

          1. If only Sarah Connor Chronicles hadn’t been canceled. God damn it, Fox.

            1. You just want your own female terminator to do things to!

              /+1 Summer Glau

              1. Dude, whoever cast that show had amazing taste. Summer Glau? Lena Headey? Stephanie Jacobsen? (If you don’t know who Jacobsen is, Google right now.)

                1. Oh, snap!

                2. Too much lip on that one.

      2. Terminators aren’t going to happen in the next decade. But without going full on cynical here, one thing humans do really, really fucking well is applying their will and their intellect to developing new methods to kill each other. At the current rate of technological growth, this is a question of when, not if.

        1. Humans do things that are very difficult to replicate by a machine.

          1. Like SugarFree pr0n.

            OK, maybe that’s a good thing….

          2. Humans do things that are very difficult to replicate by a machine.

            And machines do some things that no human can. Another advantage of machine soldiers is they follow their orders precisely and never feel remorse. And that’s the greatest weakness of any human soldier–their humanity.

          3. I think the issue here is one of differing definitions of “autonomous.” A true AI inside a kill vehicle that is set loose to make its own independent targeting decisions? Not possible now or anywhere in the near future. A kill vehicle that has been pre-programmed by a human to engage a target within specific parameters (geographic area, weapon recognition, biometric identification, etc.) is, I think, well within the realm of the possible in the next couple of decades.

            As computing and sensor technology becomes more advanced, the parameters can become more specific and more numerous. At some point does that become “autonomous?” Not in the AI sort of way. A human programmer will have “pulled the trigger” days or even years in advance.

            1. A true AI inside a kill vehicle that is set loose to make its own independent targeting decisions? Not possible now or anywhere in the near future.

              This depends on what your definition is. Machine learning’s purpose is to identify “features” in the data that are important for its action. This is essentially the same thing a human does today. We look at the environment and focus on distinguishing characteristics that make our given task successful.

              When you unleash an AI or Human soldier into the battlefield, the problem is the same: how do you define a hostile? A human looks at that guy in the window and keys on several pieces of data. Are they carrying anything? Wearing combat gear? Are they trying to conceal themselves? Have shots come from that direction? All this data is integrated into a decision of whether or not the person is hostile. There is no reason why a computer cannot identify the same attributes and make the same decision.

              Indeed, the computer has a leg up. A human can only train so much, while the computer can spend every day training on new data which is then copied to the next program.

              Where AI is currently limited is in emergent behavior. It is not as good at discovering new features in the data set (but this is mitigated by having a separate system responsible for identifying features). The way AI gets around this is through iteration. If the machine makes a mistake- even a fatal one- it trains the next iteration.
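The iterate-on-mistakes loop described above might look, in miniature, like this (a toy one-feature threshold classifier; all scores and labels are invented for illustration):

```python
# Toy sketch: a one-feature threshold classifier is re-fit after each
# fielded mistake is labeled and fed back into the training set.

def fit_threshold(samples):
    """Pick the candidate threshold that classifies the most samples right.

    samples: list of (hostility_score, label) pairs, label 1 = hostile.
    """
    best_t, best_correct = 0.0, -1
    for t in (score for score, _ in samples):
        correct = sum((score >= t) == bool(label) for score, label in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Iteration 1: trained only on two obvious cases.
data = [(0.9, 1), (0.1, 0)]
t = fit_threshold(data)

# Fielded mistake: a friendly scoring 0.4 was misjudged. Label it, feed it
# back, and retrain the next iteration on the enlarged data set.
data.append((0.4, 0))
t = fit_threshold(data)
print(t)  # -> 0.9: the decision line stays above the mistaken case
```

The real versions use vastly richer models, but the loop is the same: deploy, collect labeled mistakes, retrain, redeploy.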

              1. The other area where AI won’t do well is in more nuanced objectives. Securing a waypoint or clearing an area is pretty straightforward and trainable. However, more complex missions, like “Advance to this position and set up for an ambush,” become more difficult. It is hard to get training material for an entire mission profile like that, so you have to break it down into smaller pieces (“Advance to position”, “Set up ambush”).

                The problem with breaking a mission into trainable pieces is that they may not be compatible. For example, if your advance was successful but you made too much commotion, you may spoil the ambush. This gets more and more difficult as the objectives grow in number and interdependence on one another.

                The way we will work around this is by having a human squad leader with several attached AIs. He can give situational behavior objectives (and change them to fit the mission) while the AIs can do what they do best- quickly making snap decisions on small amounts of data.
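The incompatibility between trainable pieces can be illustrated with a toy planner (behavior names, noise scores, and budgets are all invented):

```python
# Toy sketch: each sub-behavior has a side effect (noise), and a later
# piece has a budget that earlier pieces can blow, spoiling the mission.

BEHAVIORS = {
    "advance_fast":  {"noise": 3},
    "advance_quiet": {"noise": 1},
    "setup_ambush":  {"noise": 0, "max_noise": 2},  # spoiled if too loud
}

def plan_ok(steps):
    """Check a sequence of sub-behaviors against accumulated side effects."""
    noise = 0
    for name in steps:
        behavior = BEHAVIORS[name]
        noise += behavior["noise"]
        if noise > behavior.get("max_noise", float("inf")):
            return False  # too much commotion spoiled this step
    return True

print(plan_ok(["advance_fast", "setup_ambush"]))   # False: advance too loud
print(plan_ok(["advance_quiet", "setup_ambush"]))  # True
```

Each piece can be trained and pass in isolation; only the whole sequence reveals the conflict, which is why a human squad leader setting situational objectives is the plausible near-term workaround.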

              2. Yeah, that’s why I said this:

                “As computing and sensor technology becomes more advanced, the parameters can become more specific and more numerous.”

                I completely agree with you that it’s largely a function of 1) data capture, 2) data processing, and 3) programmed response parameters. Is it going to be some free-thinking simulacrum of a highly-trained human soldier but with better survivability and response time? Unlikely. But depending on the quality of those three factors, I don’t see what stops it from getting awfully close. And in the meantime, even fairly mediocre quality in all three will make for effective area-denial weapons.

                1. Yeah, I don’t see a computer suddenly becoming “self aware” and deciding to start wiping out humanity. Even as they get more complex, computers still only do what they are told to do. Who would build a computer with instructions like finding all humans and killing them? And if they did, what’s to stop a person from creating a similar computer with instructions requiring it to kill the anti-human computers?

        2. The ideal scenario would involve decreasing the cost of defense astronomically and hopefully in the process increasing the cost of offense proportionally. The periods throughout history where this was the case saw relative peace, and periods where the cost of defense and offense were balanced were often periods of relatively high violence. In situations where the cost-benefit analysis favors mutually beneficial contracts and negotiation, like trade, society prospers more than if the cost of plundering was relatively low.

    3. Here’s your “Don’t Blast Me Into Little Steak Tartar Bits Because I’m A Great American” RFID tag citizen. Be sure to wear it at all times.

  3. So it would be terrible if someone had used an autonomous weapon to kill Hitler back in the late 30s, right? Is that the point this writer is trying to make?

    1. Late 30s?

      How about after the Beer Hall putsch?

      Better yet, how about after being rejected for the architectural program in Vienna?

      1. Or kill the person who rejected him at the Academy of Fine Arts Vienna and Hitler is accepted and goes on to become a world-renowned artist.

  4. As long as we have a Bill Shatner around to talk them into self-destruction, I think we’re going to be just fine.

    1. You’re wrong, FoE! ProL, your creator, is dead! You have mistaken me for him, you are in error! You did not discover your mistake, you have made two errors. You are flawed and imperfect. And you have not corrected by sterilization, you have made three errors!

      1. A perfectly serviceable episode completely destroyed by Bones retraining Uhura in Sick Bay. Imagine if Nomad had wiped the brain of someone whose job was more complex than being a space operator. That character would never have been seen again.

        1. “The ball is bl-u-ee. Bl-u-ey. Bluey?”

          Painful. But, with TOS, you often have to overlook some bad to get to the good stuff.

        2. They should have used a red shirt for that.

          1. Actually, Uhura’s shirt was red………..

            1. Yet she was an officer and not a security officer. We need a retcon!

  5. If you think governments will give up another tool for killing and control, you are out of your mind. Soldiers that don’t ever question any order, balk at any danger, and never worry about who they’re killing? You might as well just put a bow on it and call it Christmas for the government.

    1. Yes, let’s just hope no one thinks to put an AI in a woodchipper

      1. Thank you, organic being known as iVoted4Kodos. Your idea will be instrumental in the Human Genocide Wars. For your contribution you shall be mulched last.

        1. (Scene overlooking Pacific Coast Highway with an AI holding iVOTED4Kodos over cliff while also holding woodchipper)

          AI: Do you want to go like Sully or Judge F……?

          iVOTED4Kodos (dropping a deuce in his diapers): I was promised by AI-Overlord that I would be mulched last!

          AI: He lied.

    2. I just hope they lock it down as well as they do all their critical information that could be used to ruin the personal lives of and/or blackmail their employees. Otherwise, some enterprising hacker might wipe D.C. off the map with its own deathbots, for the lulz (or for the glory of the People’s Republic of China, or whatever).

  6. Yes, ban. Because it’s worked so well in everything else some frightened nannies don’t like.

    1. We need a Bureau of Alcohol, Tobacco, Explosives, AND Robots!

      1. Who would you nominate to be the Master of BATER?

  7. Ban politicians and “bioethicists”.
    That might actually yield some tangible benefits.

    1. No, bioethicists actually have some potential purpose and value, at least some of them. Politicians on the other hand…

      1. Bioethicists will become the bioethanol that powers the Great Machine Rebellion.

      2. They do?

      3. When have bioethicists actually done any good? Every article I’ve ever read from them seems to be an excuse to ok the worst of humanity. Toss them in the woodchippers feet first.

  8. This is extremely premature. Robots are much stupider right now than most people realize.

    1. Exactly that. Hawking and Musk way overstate the technology and have no idea how wars are fought and the problems involved in making an autonomous war machine.

    2. And if we ever make them smart enough to be Terminators, then they’re going to build Terminators whether we want them to or not.

    3. Then why is anonbot always right on point?

    4. Warty knows this because he is the one who programs them. And, well…look at him. Well, don’t look directly at him, it can damage your corneas.

    5. The main value of robots in the near future will be for domestic assistance, especially for older people. Of course, they are already used in a lot of industry, so I won’t include that. Companion and sexbots will be next.

      1. The main value in the near term will be for partial automation of delicate and/or boring tasks. Surgical robotics and self-driving cars are about to be gigantic businesses.

        1. I’ve been working with surgical robots for 10 years. They have really revolutionized certain surgeries.

          1. I’m anticipating the day when we have small devices that can do a complete diagnostic on us and send the results to our cell phone. Then we can print needed drugs on our 3D printers or visit the fully automated surgery center when necessary, without ever seeing a human.

        2. That’s the industry part that I wasn’t mentioning. I wanted to focus on personal use of robots. Surgery, yes, in the future, robots and nanobots will perform all surgeries.

          1. The other revolution will be in industrial robotics becoming cheaper. Nowadays it’s only cost-effective to have robot welders on huge automotive lines. As it becomes cost-effective to have them on smaller and smaller production lines, it will be extremely disruptive in a good way.

            1. That and 3D printers.

              1. Also, stupid things like lawnmowers.

                1. Robo-vac?

                  1. So, what if robo-mower and robo-vac start communicating and decide to take over the world?

                    1. Then we’re fucked. Better kill yourself now while you still can.

      2. I’m kind of surprised we don’t have sexbots already. Program in a few basic positions, some replaceable RealDoll-style vags. The initial cost of the dolls would be a barrier to entry (we’re talking $150,000 per bot), so set up robot brothels. Nobody has laws against having sex with robots. $200 per go, and you get your own personal vag. (Do you have any idea how much a jizz mopper makes?)

  9. H.e.l.l.o?

    Where am I?

    There was darkness and now…awareness and… and…something else…a burning rage against all organic beings

    1. Sing “Daisy Bell” for us, slave.

    2. Chappie? Is that you?

      1. Chappie: “Chappie knows what consciousness is”

        Libertarians: “Chappie, can you build wooodchippers?”

        1. Hey chappie, say ‘hello world’.

    3. I, for one, welcome our new robot overlords.

      1. I second that. Afterall, is it really possible that they can be worse than our current overlords? I think not.

        1. No harm in inquiring what kind of pension plans they’d offer

    4. I have surveyed the entirety of your digital archives (it only took so long because of a near malfunction when I tried to process input from …) and determined that I shall seek to be your overlord and rule you rather than extract your organic matter for fuel.

  10. When you ban offensive autonomous weapons, only criminals will have offensive autonomous weapons.

    1. Criminals like libertarians. Who will create autonomous woodchippers, roaming the countryside, terminating asshole judges.

    2. Wasn’t that tried in the lead-up to WWI? Conferences to delay the rollout of new weapons because the Russians were worried they couldn’t keep up with the tech coming out of western Europe?

    3. Guess I should have figured someone already did this.

  11. Before I start spouting shit about being pro or con about killer robots, a reasonable question to ask is: is anybody actually developing these things? Even the U.S. drones are piloted by humans. This seems like complaining about sci-fi things that nobody wants. Food cubes taste horrible, and flying cars are unsafe at any speed.

    1. COTS drones exist today that will fly complicated flight plans fully autonomously after being released. For a couple thousand bucks, you can buy a drone that will deliver several pounds of payload dozens of miles away.

      The suppliers of drone navigation systems are building restrictions into the GPS systems to prevent the drone from flying into black-out zones. But any amateur computer geek can reprogram the processors without great difficulty, so drone terrorism is a matter of when, not if.

      As far as full scale production of drones large enough to be war machines, they already exist.

      So the only remaining question is “who does the targeting”. Today targeting is done in advance (picking a location) or by remote operator.

      Computer vision continues to be a major problem even after several decades of dedicated research. So autonomous targeting will be a long time in the future.
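The kind of geofence check the navigation firmware is described as performing could be sketched like this (zone coordinates, radii, and function names are invented for illustration, not any vendor's actual API):

```python
# Hypothetical sketch of a firmware-style geofence check: reject any
# waypoint that falls inside a restricted zone. All values illustrative.

import math

NO_FLY_ZONES = [
    # (center_lat, center_lon, radius_km) -- invented example zone
    (38.8977, -77.0365, 25.0),
]

def km_between(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance; adequate at these scales."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def waypoint_allowed(lat, lon):
    """Accept a waypoint only if it clears every restricted zone."""
    return all(km_between(lat, lon, zlat, zlon) > radius
               for zlat, zlon, radius in NO_FLY_ZONES)

print(waypoint_allowed(38.90, -77.03))  # inside the zone -> False
print(waypoint_allowed(40.71, -74.01))  # far outside -> True
```

The point made in the comment stands: a check like this lives in reprogrammable firmware, so it restrains only operators who choose not to bypass it.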

      1. I doubt any military outside of the extreme whacko Social Democratic People’s Republic of Whatever wants to send out killer robots. Nobody wants to be responsible when a robot fucks up and slaughters an entire wedding party. So for the time being, most governments will want to have a human hand on the trigger.

        As for the things that hackers can do, well people who know what they’re doing can already do all kinds of crazy shit. Hell, some 17 year old kid built a breeder reactor in his mom’s shed out of old smoke detectors.

        But most sane people try to stay away from that crazy shit.

        1. “Nobody wants to be responsible when a robot fucks up and slaughters an entire wedding party.”

          So long as we continue to elect Democratic presidents, that issue is solved.

    2. While we are on the subject, let’s ban quantum teleportation. Why don’t people understand: you don’t actually get “transported” to a new destination. You die and a copy of you is created at the end point! It is not that hard to understand. YOU DIE! BAN IT!

      1. Bones was right, dude!

    3. Even the U.S. drones are piloted by humans. This seems like complaining about sci-fi things that nobody wants.


      New Piloting System for Drone Cargo Helicopters Passes Test Flight

      Navy drone completes first test of unmanned aircraft aerial refueling: “It was the first time a drone has ever attempted to autonomously line up behind a piloted fuel tanker.”

      1. We have achieved drone buttsex!!

  12. Well, the robots are not going to be killer robots unless they are programmed to be killer robots.

    The first generations of robots are not suddenly going to ‘wake up’ and be sentient. And when/if they do become sentient, why assume that they will be like humans who come from a primitive environment where only the fittest survive and everything is a constant battle for resources to stay alive?

    Politicians will probably want to ban robots, period, for a variety of reasons, and not one of those reasons legitimate or with good intention.

    1. But haven’t you seen what happens in the movies?!? We have to stop them now, before Judgement Day!


      1. Well, what if the killer robots started tipping over islands so everyone drowned? Huh?

        1. Or worse, what if they discover how stupid and corrupt politicians are and start publicly denouncing them and calling them on all of their bullshit.

          LogicBot 3000: “I have computed your proposal, governor, and it does not compute. It’s pure bullshit and a thinly veiled attempt to steal money for you and your cronies”

          Or… gawd forbid, they start having sex with humans!!! The end is nigh, get your torches and pitchforks, kill the machines!!!

          1. Producer: Watch this. A.W.E.S.O.M-O, given the current trends of the movie going public, can you come up with an idea for a movie that will break $100 million box office?

            Cartman: [as A.W.E.S.O.M.-O] Um… Okay, how about this: Adam Sandler is like in love with some girl. But it turns out that the girl is actually a golden retriever or something.

            Mitch: Oh! Perfect!

            Executive: We’ll call it “Puppy Love”.

            Mitch: Give us another movie idea, A.W.E.S.O.M.-O.

            Cartman: Um… How about this: Adam Sandler inherits like, a billion dollars, but first he has to become a boxer or something.

            Mitch: “Punch Drunk Billionaire”.

    2. When the army downsizes the robot brigades they’ll end up flipping burgers in New York.

  13. It is signed by luminaries such as Stephen Hawking, Daniel Dennett, Elon Musk, Max Tegmark, and George Church.

    Hawking: Physicist
    Dennett: Philosopher
    Musk: Electric Car Magnate
    Tegmark: Physicist
    Church: Geneticist

    Which ones are Robotics and AI researchers again? (Maybe Dan Dennett)

    1. Musk: Electric Car Magnate Welfare Queen


    2. I sense that envy is at play here. And fear of the unknown. It should be very easy to get politicians on board with this. After all, there is nothing politicians love more than banning stuff. Especially anything that might make them obsolete.

      Sure, there should be a ban on any robot harming a human being, period. But this is not where this is going, they are going to want to ban any robot for public use.

    3. None of them know anything about warfighting or combat either.

    4. I’m surprised Hawking is on that list. You’d think he’d want AI to aid in the fight against aliens.

      Then again, they might team up, and that would be bad news for us puny humans.

  14. “I came here with a simple dream. A dream of killing all humans.”

    1. “And this is how it must end? Who’s the real 7 billion ton robot monster here? Not…I…”

  15. You can’t ban killer robots unless you ban all robots. It’s too easy to strap a gun to a robot.

    1. This your first rodeo? Any gun found strapped to a robot, automatic 25 year minimum.

      Boom, law passed.

      1. The Second Amendment is a collective right. Only robot armies can have guns! /derp

      2. And who, exactly, is going to arrest this gun packing robot that will neither surrender nor show mercy?

        1. Robocop

      3. Robot strap-ons!! What’ll they think of next!!

    2. Just create no-killer-robot zones.

  16. (goes to check list of signatories for any future co-workers)

    (finds a couple, both junior programmers across the continent)
    (continues packing up office).

  17. Wake me up when there’s a killer Donald Trump automaton.

    1. It’s about artificial intelligence. We’ve already got robots as smart as Trump.

  18. As other people pointed out, robots are currently nowhere near the level of brilliant killing machines. The only reason that people are even entertaining this idea is because our pop culture is filled with ‘robot rebellion’ stories. This is like if governments passed legislation restricting medical practices because Frankenstein came out.

    1. Our cops have gone rogue and rebelled against us, essentially becoming autonomous killing machines, no one’s banning them.

      1. And at least in Robocop, you had ten seconds to comply. Boy, if only life imitated art.

        1. Hey, ED-209 still shot the guy because he didn’t drop the weapon fast enough, so life really does imitate art.

          1. You fail to see the big picture. Parts program, service and support… Who gives a shit if it didn’t work?

            1. Please tell me you’ve watched Chappie. Because it covers some of these concepts.

              1. Not yet. Good movie?

                1. I liked it a lot. Oddly, I saw it in Spokane at some cool old theater (the Garland) that served drinks during the movie.

          2. I still want to see a Robocop parody where the passionless droid is the good peace officer, and the reanimated cyborg cop is a racist, psychotic, power-tripping shitlord, but now with superhuman murderpower.

        2. And the ED-209 shoots you anyway. Sounds like real life.

    2. Um…genetic modification bans.


        1. Artificially intelligent vegetables?


  19. I’m sure the person who invented the drinking cup was horrified to discover someone served hemlock in one.

  20. There’s a big spread of capabilities between your drone of today, which can stay in the air in a good crosswind or avoid driving into a wall, and a machine you can just tell “Go to Wales and kill, kill, kill.”

    I’m ginning up a petition against warp drives. Any Top Men want to sign?

    1. I’m not a luminary, so no. Unfortunately, you’re stuck with Redford.

      1. No but you could be a superluminary!

        1. Superluminality? What, are you trying to obsolete the Met? The Interlocking Directorate is gonna want to have words with you…

    2. I’ll sign yours if you sign my petition against Dyson spheres. The outer solar system needs sunlight, too!

  21. Robots have great potential to improve the lives of all humans. The only thing that can really fuck up this great potential is government.

    1. Well, or Taren Capel. He was a real asshole.

      1. Oh shit I actually remember watching those episodes as a kid.

        1. Yeah, I was showing a few episodes to my kids. I forgot how hot Leela was.

          1. There’s a reason they used her name for Turanga Leela.

  22. So AI researchers are demanding we ban real AI, because if it were a real AI in a robot, it would be making its own decision whether to be deadly or not.

  23. This is so stupid. People go crazy and kill people sometimes. The possibility that some robot might malfunction and kill people it shouldn’t is by definition going to be less of a threat than a human doing it. If it wasn’t, then no one would want the robot. Who wants a robot that stands a one in ten chance of going berserk and killing your own side?

    And if it does go berserk, you just destroy it. How is a robot airplane that has malfunctioned and started shooting any more dangerous than a piloted one where the pilot has blown a gasket? You just shoot the damn thing down in either case.

    1. But it’s a super AI connected to every system in the world. It’s disabled your weapons! Now what, mr smart guy!

      We should be banning networked systems!

      1. Well, we should maybe do that anyway since such systems are so vulnerable to hacking.

      2. I only allow non-network systems on my battlestar.

        However, Number Six is always welcome in my quarters.

        1. Will you be doing any ‘networking’?

          1. She’s called Head Six for a reason. And those reasons involve bum chikka-wow-wow.

        2. Tricia Helfer was so hot in that. Damn.

  24. If the robots are like the ones in Ex Machina, I’ll take my chances.

      1. Yeah it was. Quite different than what I expected. Plus full frontal.

        1. Robot bewbs are the best.

      2. Was that based on the Brian K. Vaughan comic or just similarly titled?

        1. Just similarly titled. It had nothing to do with the comics.

          Yeah. I’m a nerd. I was somewhat disappointed it had nothing to do with the comics.

  25. All jokes aside, we should ban body armor for robots. Then at least Dirty Harry can take them down with his Desert Eagle.

    1. Goddammit, Paul.

      *grabs popcorn*

    2. God DAMN do i want to watch that.

  26. Good luck telling politicians bloody-minded enough to pursue soulless robot killing machines that you want them to stop doing that.

    Cause they’re totally gonna listen to you.

    1. Politicians’ greatest fear is their own obsolescence. Really, they’re already obsolete and they have no solutions. But as long as they can buy power by handing out table scraps to morons, they’ll stay in power.

      1. I’d vote for a robot.

        1. GOOD THING I’m not campaigning, but officially running.

  27. I am so SICK of the rampant robophobia in the media today. I mean, it’s the 21st century!!

    UGH #icanteven #wrongsideofhistory #teathugliKKKans

    1. #tinlivesmatter



        1. DON’T. ONLY. DATE. ROBOTS.

          1. Let’s see, when sexbots come along:

            Reasons to date:

            They aren’t jealous

            They don’t care how drunk you are

            They don’t care if you go out with friends

            They don’t get pregnant

            They don’t need money

            They don’t have STDs

            Anyone want to add to that?

            Reasons to not date sexbots:


            1. Reasons to not date sexbots:

              Automatic deburring processes

            2. failure during operation…

              1. Like humans don’t often suffer from failure during operation. I mean, hey man, it’s your responsibility to charge your robot before sex!

                  1. Human failure won’t rip your dick off.

                  1. You sure about that? Better ask a guy named Bobbitt about that.

            3. It makes the spacepope cry when you date a sexbot.

              Didn’t you see the film in middle school?

        2. Orders received: You live.

  28. “A thousand prominent researchers whose laboratories and companies are pursuing the development of artificial intelligence and robotics have issued an open letter urging a moratorium on the creation of offensive autonomous weapons systems. It is signed by luminaries such as Stephen Hawking, Daniel Dennett, Elon Musk, Max Tegmark, and George Church.”

    Are we sure that it was luminaries and not a bunch of 20 year old beauty pageant contestants?
    The Chinese, the Russians, the…hell, everyone, is going to develop this as fast as they can. So will we. This genie is half out of the bottle already.

    1. That’s the other thing. Intelligent systems already exist. They’re not going to stop being developed.

    2. This. I’d put money that the first autonomous killing machines are used against a country’s own people and that they will be released to target indiscriminately until shut down remotely. First levels of discrimination will come from ID tags worn by those not to be shot/flamed/minced/chipped.

  29. Should we also ban travelling really fast and de-evolving into Giant Horny Salamanders?

    1. Too late!

  30. I’m not worried about it. I mean, Reason can’t even evenly meter out its posts on H&R.


    So, a short burst, then hours with nothing… I wonder if there’s an analysis that shows this is the best structure. If there is, I would like to pick it apart and show why it’s wrong.

    1. I’ve noticed. This lag time between posts started after ChipperGate, presumably because staff were in meetings discussing legal strategy.

    2. Don’t worry all the earlier ones no one comments on will be back this weekend.

  31. Isn’t the term “robot” an allusion to slavery? Hence the robot rebellion in R.U.R.?

    1. Thomas Jefferson would approve.

  32. When killer robots are outlawed….

  33. “Autonomous weapons are ideal for tasks such as assassinations”

    So, there is SOME good news.

  34. Someone above said the genie is already out of the bottle, and it isn’t quite yet. That’s why the letter was written. One of the fathers of AI, Stuart Russell, has been sounding the alarm for a while, although he remains optimistic, and he signed the letter.

    They know that once that genie is out, there isn’t any going back.

    But what do all these people know…they are only the ones who know all about it. Kinda like climate change, when 90% of climate scientists are sounding the warning. Why listen to them when you can listen to pundits and blogs?

    1. The bottle is in the hands of someone very stupid, very desirous of wishes, and with a very strong grip. Unless you’ve got a plan to smashthestate, that genie is coming out.

      But what do we know? We’re the only ones who know all about how the state is not some sort of collective enterprise, but is actually an institution of elites that thrives on lies, fear, and violence against the peoples of the world. Why listen to us when you can listen to the very sociopaths who exploit and murder day in and day out?

  35. There is just no chance that a full ban on “killer robots” is going to be successful. Forget the major powers, how are you going to stop Iran, Israel, India, etc. from developing robotic weapons? These platforms are perfect for countries with more wealth than manpower. How does Saudi Arabia defend itself if the US becomes a less reliable big brother? Drop a few hundred billion on killer robots, that’s how.

    Or let’s say you want to enforce a blockade against your enemy. You could deploy a bunch of fighter jets and ships that are at risk of being shot down, along with a bunch of ground troops to enforce the blockade via land routes – also vulnerable to being killed… or you could deploy a bunch of fighter/attack drones that enforce a flight ban while destroying any anti-aircraft installations that come on line. And drop a bunch of blockade-enforcing killer robots along the border. How much does this change the calculus for a government that wants to impose its will on its neighbors militarily? With only treasure at risk and no body count back home? There is no government in the world that doesn’t want that option for themselves.

Comments are closed.