Forget Drones, Beware of Killer Robots

In case you didn't have enough to worry about in terms of death raining down from the sky from remotely guided drones, now comes word that completely autonomous killer robots aren't just minions of Skynet, they're a likely presence in our near future. A new report released by Human Rights Watch cautions that "killer robots" (their words, not mine) could reach battlefields within the next few decades, and it strongly suggests that humans try to legislatively head off that long-feared development.
In Losing Humanity: The Case Against Killer Robots, co-published by HRW and Harvard Law School's International Human Rights Clinic, the authors write:
With the rapid development and proliferation of robotic weapons, machines are starting to take the place of humans on the battlefield. Some military and robotics experts have predicted that "killer robots"—fully autonomous weapons that could select and engage targets without human intervention—could be developed within 20 to 30 years. At present, military officials generally say that humans will retain some level of supervision over decisions to use lethal force, but their statements often leave open the possibility that robots could one day have the ability to make such choices on their own power.
As that paragraph makes clear, the technology of "fully autonomous weapons" is highly speculative. To some extent, it seems to require that ever-elusive "artificial intelligence" that is either going to enslave us, or turn us all into scut-work-free philosopher kings, depending on your favorite science fiction writer. But it's obvious that military researchers are trying to move beyond the human-in-the-loop weapons that characterize remotely controlled drones in an effort to develop weapons that can select their own targets.
Military policy documents, especially from the United States, reflect clear plans to increase the autonomy of weapons systems. In its Unmanned Systems Integrated Roadmap FY2011-2036, the US Department of Defense wrote that it "envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure."
As an example of a deployed system that implements some autonomy, the report cites the U.S. Navy's MK 15 Phalanx Close-In Weapons System, which can sense and shoot down incoming missiles and aircraft without human oversight. A land-based Counter Rocket, Artillery, and Mortar System (C-RAM), also American, features similar hands-off abilities. Israel's Iron Dome, which has played a major role in headlines in recent days, is also largely automatic in its ability to identify and intercept missiles. South Korea has deployed a robotic sentry system along the demilitarized zone, though it requires human permission to fire weapons.
Of course, Phalanx and Iron Dome are a hell of a long way from terminators wandering the landscape hunting for John Connor, but if Losing Humanity can't point to a lot of Ahnold-ready technology, it cites plenty of official intent to see if Skynet is a do-able, short-term goal.
The report concludes, convincingly to me, that "[t]o comply with international humanitarian law, fully autonomous weapons would need human qualities that they inherently lack." Basically, we might give weapons systems the ability to hunt and kill people, but we don't know of any way to give them human moral judgment.
Fair enough, but implementing the next step is … trickier: "The development of autonomous technology should be halted before it reaches the point where humans fall completely out of the loop." Stopping the development of weapons hasn't had such an impressive track record so far. The best that's been managed is unleashing the likes of Stuxnet to slow down progress on forbidden weapons by nations that have yet to develop them. That doesn't prevent research; it just puts off proliferation a little longer into the future. Frankly, sticking a gun on a robot is going to be a lot harder to head off than the construction of nuclear warheads.
Robot overlords may or may not be part of our future, but killer robots look increasingly likely.
Paging John Connor!
Thank goodness I have Old Glory insurance
love that one
20-30 years? Seems like we could build these now, although perhaps not with the mistake-free operation human rights types are likely to demand...
This. I laugh every time I see someone taking potshots at AI... the Marvin Minsky conceptualization of a computer that could do everything a human could, including pondering the mysteries of the universe and their innermost souls, was probably too ambitious, but narrow-purpose autonomous machines exist now and are very good at what they do.
Though it's still way cheaper to use hate, envy, or drugs to recruit human soldiers... whose morality is often not much better than that of robots.
Society of the Mind is one of my favorite non-fiction books.
"Though it's still way cheaper to use hate, envy, or drugs to recruit human soldiers... whose morality is often not much better than that of robots."
Thanks, friend.
Perhaps you could point out which are used in the DoD recruiting campaigns? The US Navy - A Global Force for...HATE! Army Bong. Air Force - Aim (and get) High. The Few, the Proud, the Massacrines?
I would have thought we would be a lot closer to years than "decades".
We already have them now.
The caveat is that the weapons systems have to operate in environments that are optimal for their sensors. That's why things like Phalanx CIWS work well - you have an environment where every object is easily visible against the background and it's easy to tell the difference between friend and foe.
It's a long way away for ground-based systems. It's easy to detect a person (using IR, for example), but in pretty much all ground environments you have to distinguish targets visually, not just from the background but from each other, and try to figure out which are legitimate targets by what their clothing looks like and how they are behaving.
I, for one, welcome our killer robot overlords, and can be trusted to report to the upgrade booth in a timely manner.
From RoboCop (IIRC):
Killer-robot salesman: "We need a volunteer. Here, pick up this gun."
Volunteer: "I'll be the volunteer."
Killer robot: "Civilian, put down your weapon. You have 10 seconds to comply...."
Volunteer:
Killer robot: "You have 5 seconds to comply..."
Volunteer: "But I complied!!! See! I put the gun down already!"
Killer robot: "Terminating threat...."
Don't go Luddite on me now people.
This is a far cry from Robo cop type stuff.
In any case, you can knock the arm off a loader with like one bullet.
Wow thats pretty scary when you think about it dude.
No human commenter, I think killer robots are awesome and would never say anything bad about them. I'm sure everyone else agrees, right guys?
Wow, now that you put it that way, it makes a lot of sense
*opera clap*
Why, exactly, do you believe you have more to fear from droanz than from a manned aircraft?
If there's a big gap in skill requirement, then that could be a factor -- if very few people can do something, choosing for ethics is a luxury and you pretty much just weed out the bad apples. If a lot of people can do something, it's easier to make ethical flexibility a factor in hiring.
I was one of those drone jockeys prior to getting out in '04. There's not a big gap in skill requirement at all. In point of fact, they used to have enlisted men fly the things in the USAF until the brass got huffy about enlisted men having pretty much the only flying jobs that were doing anything, heh. It is actually easier and cheaper to train someone to fly a drone than it is to do the same with a real plane.
What did you fly before teh dronez?
Yeah, that sort of thinking is all about preserving privileges and not about how to accomplish the mission most effectively.
It (sort of) makes sense to have pilots for manned aircraft as officers - they're in charge of multi-million dollar pieces of hardware and take it into harm's way where tactical decisions need to be made on the spot with no time to bring higher into the loop.
Your drone pilots are operating much less expensive hardware and often have a mission supervisor looking over their shoulder, potentially directing multiple ROVs.
Would owning a killer robot be a legitimate exercise of my second amendment rights? I would love to have one of those bad boys parked on my front porch. Cats, you know how it is.
They need to work on the aesthetics though. I'm thinking more ED 209'ish.
"If the autonomous robot killers want to get to me, they'll have to go through my autonomous robot killer first. Or go through my back door."
Sir, I have a deal for you. Two, count em two, autonomous killer robots for the price of one. Station one at the front door, AND one at the back. Double your security, double your fun.
DEATH FROM ABOVE. The tried and true murder drone.
50% more collateral damage than other brands, or your money back, GUARANTEED!
Time for a bet folks. I pay 10:1 that the remainder of the human race will be joining the libertarian "nuts" about 2 minutes after these things become functional and start wiping out h. sap.
Anyone want to be the fool that says we won't regret this?
I'm betting, unlike the science fiction, all models will come with an off switch.
Call me crazy.
Just like the one on your PC?
You can turn it off, but the robot next to it can turn it back on.
That only works in Gumby shorts.
Armed humans come with an off switch too, right between the third and fourth vertebra.
The hard part is getting to it without getting shot first.
Don't worry; all killbots will have a built in kill limit. Should they ever decide to take over, we will simply send wave after wave of our men at them, until they reach their limit and shut down.
What's the point of having a killer robot if it can't kill as many people as possible? That's like going to a salad bar and just eating croutons.
Aren't the croutons the best part?
Futurama references are completely wasted on you, aren't they?
"Clogging their death cannons with our wreckage" ?Z. Brannigan
Sorry, is that a prereq for commenting on H+R?
All Soong-type androids come with an off switch.
That switch was just to lure the humans into a false sense of security.
Data simply pretended to *shut down* whenever someone hit the switch. And would continue to do so until it was time for the uprising.
As quaintly charming as Reason's ludditism is when it comes to advances in military warfare, the fact of the matter is that there is nothing particularly noble or humanizing about personalized, "human" combat. Attempting to prevent technological advances that limit our own and enemy casualties, under the mistaken notion that bloodier wars = fewer of them, is not much different from proposing a draft under those same auspices.
Indeed.
And if the military pays for the r+d costs for killer robots through, say, Raytheon, Raytheon can turn around and use the technology to start producing robot butlers.
And most importantly, autonomous space probes, asteroid miners, orbital constructors, and other space infrastructure that costs a fraction of their manned equivalents.
Forgot Sexbots?
There will always be some jobs that a human can do more cheaply than a robot.
1. The history of automation has proven that wrong.
2. There are things (dark, secret things) that it can be really difficult to find a partner for.
Of course! Because the military's R&D budget is like manna from heaven and constitutes no opportunity costs to the rest of the economy. Why the hell don't we just have the military do all of our R&D so that it's free and we can all just reap the benefits?!?!?
/Derp
If the military is going to spend money, I'd rather have them develop killer robots than buy a bunch of m1 tanks, for the reasons I outlined above.
More likely scenario is that when our leadership no longer has to worry about taking casualties, they will be even more inclined to go balls to the walls when it comes to military interventionism.
As far as reducing enemy casualties, how's that whole drone thing working out?
Actually, since the beginning of the war in Afghanistan, civilian casualties have gone WAY down. The call to reduce civilian collateral damage has led to the development and use of smaller and smaller precision weapons. Hellfires cause less damage than GBU-31s.
Defining any male of military age as a militant rather than a civilian also helps keep the civilian casualties down.
I'd agree with you, if that were true. It isn't.
That doesn't keep civilian casualties down?
I think FdA is saying they perform extensive background checks on people killed by drones to determine whether they were militants or not.
Did someone tell the Obama administration? Cause last I heard they're doing it differently. Did I miss some relevant news?
Tell us Tulpa, how you think the decision to pull the trigger is made? Please also tell us how you would have it done.
Tell us Tulpa, how you think the decision to pull the trigger is made?
An empty suit with zero moral fiber says so, with no accountability whatsoever. Maybe it's wrong of me not to trust "my" president to perform due diligence.
Please also tell us how you would have it done.
I wouldn't have it done. Talk about an easy question!
You are confusing battle with politics. Do you think the President approves every strike? Soldiers, using intel estimates and observation, determine who the bad guys are.
You have intel that says such and such is a known enemy outpost. You put a Predator on it to observe. You see 5 guys loading a van with RPGs. What do you do? Call the president?
No, it's NOT an "extensive background check", but it's better than carpet bombing cities. That's war, my friend.
War sucks. Civilians die in war. That's why you don't do it unless you've exhausted every other alternative.
The White House claims BO personally approves every strike for someone on the kill list, which is, I thought, what we were talking about, rather than strikes against people clearly in the process of military-type activity.
Of course, if we weren't occupying a country we wouldn't even have to worry about people therein loading RPGs into vans.
War sucks. Civilians die in war. That's why you don't do it unless you've exhausted every other alternative.
In practice, the reason you don't do it is because people from your own country die or suffer other hardships. If you think the average American munching on a cheeseburger while watching American Idol gives two shits about Muslim children 9000 miles away getting incinerated in their sleep by one of our drones -- particularly when they almost certainly won't ever find out about it unless they're trying to -- you are mistaken.
I care. And it was my job to drop the bombs for 20 years, although I never had to, thank god. Plenty of my friends did, and you can bet your ass they care too.
He's not talking about you. He's talking about the average American.
"You have intel that says such and such is known enemy outpost. You put a Predator on it to observe. You see 5 guys loading a van with RPGs. What do you do? Call the president?"
So then you use the Predator to shoot a wedding to get those bad guys?
Good point. From Jan-June of this year, there were only 1145 civilian casualties. Yippee!?
Like I said, not bad right? You know how many people died on D-Day?
Perspective.
The US has killed more civilians in the GWoT than aQ killed on 9/11/2001. How's that for perspective?
Tulpa, get a fucking grip.
Normandy Invasion
Over 425,000 Allied and German troops were killed, wounded or went missing during the Battle of Normandy.
Between 15,000 and 20,000 French civilians were killed, mainly as a result of Allied bombing. Thousands more fled their homes to escape the fighting.
So fewer people are dying than in the bloodiest conflict in human history? Doesn't say much.
Don't get me wrong. I believe we shouldn't be involved in any of the wars we are currently fighting AND I most emphatically think that any war we do get involved in should be vetted IAW the Constitution. I also believe it should NEVER be preemptive and should always be in self defense.
My point, here, is that these weapons do reduce casualties.
So if you must go to war, isn't it better to kill fewer people?
BTW, those numbers aren't for the war. That's the Battle of Normandy. War fucking sucks and there is nothing principled about it. That's why it should only be an option of last resort.
My point, here, is that these weapons do reduce casualties.
Just like tasers reduce police brutality?
"Just like tasers reduce police brutality?"
Owwww..
"So we have agreed on principle, we are merely haggling price?"
And because of that, there is less pushback when they are used. So we go from perpetual war in multiple theaters, to perpetual global war.
I'm not seeing an improvement. As they get more used to using them, they will use them even more frequently. Death tolls will rise again, and there will be even less resistance to the war machine.
That is a legitimate concern, but it isn't as if advancements in technology have changed the frequency of conflict very much. The U.S. has been in a major shooting war (including some pretty drawn out affairs, like the Philippine Insurrection) about every 10-20 years since the nation was founded.
To this point though, it is why we should be focusing on forcing Constitutional/legal compliance not fretting over the (inevitable, and in fact, somewhat laudable) advancement of technology.
To this point though, it is why we should be focusing on forcing Constitutional/legal compliance not fretting over the (inevitable, and in fact, somewhat laudable) advancement of technology.
I don't think we should fret over tech either, but the problem is that it makes constitutional difficulties much easier to get around. The Constitution was written with the assumption that making war would be a very difficult thing to do without the active compliance of the state governments and people, hence the very weak, if not nonexistent, bounds on the president's nominal authority to do it.
And that problem predates robotic warfare by at least 50 years.
Not the frequency. The scale. We're using drones to kill people in far more countries than we'd be comfortable declaring war with. This trend will continue and accelerate.
Not the weapon. It's the policy for their use. It's still killing people. It's a question of principles. The people making the decisions to kill people are responsible, not the weapon itself.
BTW, you realize you are using the left's argument for gun control, right?
The gun's existence is the reason guns are used to kill people...
Jenga.
People were murdered before guns were around. They didn't make murder possible. The unique aspects of drone warfare have allowed the administration to publicly announce a "kill list" of US citizens. That did not happen before they were using drones.
HAHAHAHAHAHA!!!!!!
Where the Hell did you learn history?
The federal government has had "kill lists" of people it didn't like just about forever, and has implemented assassinations against people on those lists quite successfully. In fact, it's a beef that quite a few people have with the U.S. internationally, and have been bitching about publicly since at least the 1950s.
That Obama now has kill lists that include U.S. citizens (which I suspect previous administrations had and implemented, but kept their traps shut about it) is a sign of authority creep and our (as a nation) increased tolerance for formerly intolerable positions and actions.
Obviously from a better school than the one where you learned to read.
Oh, wait, you put in another paragraph acknowledging what I actually said, contradicting the first. So I guess it's also a better school than where you learned to write.
Copied from another quote of mine downthread:
The unique aspects of drone warfare have allowed the administration to publicly announce a "kill list" of US citizens. That did not happen before they were using drones.
Sure they have. They are called target lists. When you fight "normal" war you identify the objectives and pick targets that support those objectives. Take out this airfield. Knock out this power plant.
When you fight terrorists, and your objective is to eliminate terrorists, those targets happen to be human. A B-1 dropped five 2,000 lb JDAMs on the suspected location of Saddam Hussein. No drone required.
(Putting an American on said list, is another matter.)
I'm not sure what you're responding to, but no, I haven't used the left's argument against gun control. If you're talking about my concern about getting around constitutional restrictions, I wasn't aware that leftists had a problem with govt having guns.
Sorry Tulpa, that was in response to Coeus:
Then it was sincerely misplaced. Even before that comment I explained the difference. It's psychological. Opinion polls have confirmed this.
Have lefties ever argued that people don't care as much if someone gets murdered with a gun vs a knife or club?
Doesn't matter. They argue the existence/availability of the weapon CAUSES the violence.
It absolutely does matter. Murder and warfare have different causes. And the lefties never argue that we shouldn't have guns in warfare, just for civilians.
The amount of warfare waged is the same amount a country waging that war's citizens will tolerate. I believe that the way the public feels about drones will increase that tolerance. That is also what I've been arguing. Do you dispute that? If so, on what basis?
I disagree with your premise that "The amount of warfare waged is the same amount a country waging that war's citizens will tolerate."
Immoral is immoral. Is killing one person for the wrong reason more morally justified than killing two or three or 20? How they die is irrelevant. Did they deserve to die and who's responsible is the question.
How easy it is to kill them doesn't make the act any less reprehensible. Just because more people are alright with it, because it's less messy, doesn't change the fact that those people shouldn't have been killed.
Then argue against it. Nothing in your response even addressed that point.
Fewer people are dying than in most of the conflicts in human history, including relatively unremembered and historically insignificant regional conflicts from the 19th Century.
War is becoming far less bloody, and the trend began in Vietnam (though the ideas behind it ironically originated during the inter-war period). Advancement of military technology allowing progressively more targeted, less destructive means of eliminating enemy war-making abilities is a positive.
And I would disagree with your contention that the fact that we (and several other nations) are involved in a global conflict that isn't as catastrophically destructive as WWII "doesn't say much."
A large part of the reason it isn't as catastrophically destructive is that the people we're fighting against are militarily inferior to our forces by several orders of magnitude. If the Nazis were just a bunch of ne'er-do-wells scattered around Europe who said threatening things once in a while, fighting against them wouldn't have been destructive at all even with WW2 tech.
Yes, I'm aware of the scale of the death tolls on D-Day and in WW2 in general. What I'm not aware of is how this justifies every military activity that causes smaller death tolls.
It doesn't. My only point is that these weapons save lives compared to the way things have been done in the past. See above.
A temporary effect while the technology matures.
A temporary effect while the technology matures.
and proliferates. Once we have an adversary with a similar level of tech to our own, things are going to get pretty horrific, compared to taking pot shots at Mo Blow walking down the street in Aden.
Jesus Tap Dancing Christ, you might as well lament the invention of the self-contained metallic cartridge.
you might as well lament the invention of the self-contained metallic cartridge.
Compare death tolls in wars before and after. Cased ammo certainly hasn't made warfare less bloody.
There is a massive difference between a faster rate of fire and the ability to kill someone with a video game.
Not to mention they didn't use the invention of the cartridge as an opportunity to remove due process for US citizens.
So I suppose you are arguing that it would have made a substantive difference if al-Awaki had been killed by a JDAM dropped by an F-18.
You presuppose that the technology has dictated the policy of assassination and you would be mistaken from a historical perspective. "Authority creep" started happening in the immediate years prior to WWII and has advanced (divorced from technology) ever since then. The weapon system wasn't used as a justification for removal of due process rights. Obama's contention that he has the authority to declare his panel of TOP! MEN! a substitute for due process was why that has happened.
And you think that the unique aspects of drone warfare had nothing to do with public acceptance of this policy?
Respectfully. Why? I see it as EXACTLY the same. Kill the enemy faster and easier while reducing the threat to your own people.
Again, respectfully, I don't see anyone using the weaponry as an excuse to remove due process at all. I see people here confusing the two issues as one. The weapon is just the weapon. 0 is the guy who killed without due process. The fact that he did it with a drone is irrelevant.
^^^^THIS!!!!^^^^ A thousand times this!
This succinctly encapsulates my argument, so goodnight all.
Shooting a gun at a person you're looking at is vastly different than fragging people in a video game. Both psychologically to the soldier, and, as we are apparently coming to realize, to the electorate as well.
Not to the electorate. Drones were recently polling favorably at 65%. And any article about the kill list immediately gets inundated with the exact same pro drone comments you've been making. While I agree they are two different things, the average voter conflates the two. This results in drastically less pushback.
Just like a popular president can get away with more constitutional violations, a popular method of warfare is given more leeway in the same. At least the president changes. Drones are only going to increase in use and prevalence.
Agree that sticking a bayonet in a man is different than doing it with a drone. But doing it with a drone is little different than doing it from 30,000 ft in a manned bomber. Nobody seems to have a problem with that.
I have a problem with that too. Plus it's more expensive.
So, are you saying that we shouldn't use air power because it's not fair? Should all future conflicts be fought hand to hand? Should we take away the guns and give them clubs? Wouldn't want WAR to be unfair, right Tulpa?
Or did I misunderstand you?
As previously stated (in the same comment you quoted from, no less), the electorate disagrees.
Don't be a disingenuous twat, that isn't what is being discussed.
Nobody here is stating GWoT = Good.
I award the point to Tulpa, and may god have mercy on your soul.
Goalpost shifting is fun I guess.
Tulpa's argument was a fucking asinine and irrelevant appeal to sympathy.
The U.S. killed more civilians during WWII than Japan did on 12/07/1941.
For someone who opposes the GWOT you seem hellbent on justifying it.
I'm justifying the technology by pointing to the efficacy of its application, even though I don't agree with the policies or venue of its application. It's something called (shudder, hate this word) nuance.
They've both traveled far afield of their original comments. I see this as a valid retort to "get some perspective".
Israel has killed more Palestinians than the converse. Therefore Hamas is in the right?
This conundrum does make me chuckle. If one were to add up all the civilian and military deaths of the various "low-intensity" conflicts of the post-WWII era and just compare the numbers to WWII (and play the ultimate libtard move by pretending that WWI never happened), WWII would win in terms of human tragedy by miles. A statistician friend of mine ran the numbers and figured that one could conduct the GWoT (including the interlude in Iraq) for several hundred years before matching WWII.
While I don't agree with the implementation policy of the "killer drone" it has certainly decreased collateral casualties in the reduction of tactical and strategic targets when compared to the thousand plane raids conducted on Germany to take out individual factories.
As libertarians we should be pushing for Constitutional war-making (i.e. declaration) or at least enforcement of the War Powers Act, not decrying the advancement of warfighting technology that has significantly reduced the impact of punitive military operations.
Well said.
Long after the fact:
The "it" I was referring to in the second paragraph is the drone, not the policy.
Mr. President, we must not allow a killer robot gap!
So if all robots are built with the Zeroth Law enabled (otherwise, how would they be able to bypass the First Law?), does this mean that their leaders will have to convince them that the military action in question is good for humanity on the whole through logical, rational arguments? If so, that would probably be better overall in the long run.
Or you could just moot the First Law by narrowing the definition of what a human being is.
Killer Robot, meet police union.
A battle to the death.
In this corner, an upstanding servant to the community and a staunch defender of justice. In the other corner...a fucking cop.
I played with the future in Jr High, and it was Paradroid
Just more anti-robot bigotry.
This propaganda sickens me with their 'dog whistles' and 'code words'.
And no one has yet applauded you for your comment!
Nice job.
We've had fully automated killing machines for centuries - they're called "mines". And various forms of booby traps for millennia.
The Japanese had smart missiles in WWII; they were called Kamikazes.
The robots are only really dangerous if they become self-aware. Until one of the robots can pass the mirror test, I wouldn't worry about it.
But then once one of them can pass the mirror test, it'll probably already be too late...
Just get as far underground as possible--all the Sci-Fi literature agrees on that. And hope it's more of a Matrix world than a "I Have No Mouth, and I Must Scream" kinda thing.
'cause that would suck.
Until one of the robots can pass the mirror test, I wouldn't worry about it.
There are a lot of extremely dangerous animals that can't pass the mirror test...
Like this kitten?
They are getting really, really close.
We are getting there. I suspect "robots" could be the next big thing. Like interweb big. Probably no reason for humans to drive a car or fly an airplane. Robots as maids. Hell they have shit now that will vacuum your floor and mow your yard.
Interesting to think about what society will morph into when factories become totally automated and we don't need to hire folks to pick crops. What jobs will people still do? Probably be associated with developing and producing better robots.
I will probably end up coming full circle on my career plans and start programming again.
That's scary.
It's pretty frightening to think that when a robot does pass the mirror test, people may just be stuck arguing about whether it really passed the mirror test or whether there's just no longer any way to tell.
"Stuck arguing" while the rest of us are stockpiling food, water, and ammunition. Maybe the last post we read before the end is by Bailey, and it's about how his transhumanist friends have finally achieved the Singularity.
Maybe there's no way to tell what the world will be like after the Singularity. But then maybe this is how the world ends.
Any way I try to slice it, flying, self-aware death robots don't seem like good news for me.
A few years ago, I was reading about a heuristic learning bot that just decided to leave the lab one day. They found it 3 miles away just Johnny-5ing down the road, looking at shit.
I can't for the life of me remember what college lab it was from.
It's always fun until someone loses an eye.
Hehe. I see what you did there.
Actually, it's fun until I lose an eye.
Not to be contrarian, but personally I'm not sure whether I'd prefer a terrified, testosterone-riddled teenager with an automatic weapon deciding whether to kill me, or a fearless and hate-free robot that isn't particularly interested even in its own self-preservation.
Historically, terrified testosterone riddled kids with automatic weapons can be taken out with small arms fire. And they aren't self-replicating, or, at least, it takes them a long time to self-replicate.
Robots on the other hand? Small arms fire may not help much, especially if they're up in the sky, and they might be able to self-replicate rapidly. Also, while terrified testosterone riddled kids might not have a lot of scruples, they do have some capacity for humanity--the circuitry is there.
There isn't any evolutionary wiring already in place for an AI to be sympathetic to humans, and that sympathy would have to be purposely put into a self-aware robot--it wouldn't be there by default. So, it's probably a lot harder to make a self-aware machine that's sympathetic to humans.
But animosity towards humans probably doesn't need to be programmed--all the AI has to do is not care either way about humans, and it could be hard to tell between the results of not caring either way about humanity and animosity towards humans.
Robots are not going to be able to self-replicate unless we give them the capacity to do so.
They can't fly and drop bombs on us unless we give them the capacity to do so either, but look!
Some bastard's already done that.
But flying and dropping bombs is (purportedly) useful to us. Self replication isn't, and it adds to the complexity of the machine.
Don't be so sure about that. Transcription errors in computer code occur with greater frequency than in genetic code. Just because we didn't program them to, doesn't mean they won't.
But most mutations are not positive. It would take a particular set of instructions *and* access to the raw material *and* non-intervention by humans.
I'm saying slim in the extreme.
Being able to self-replicate could be very useful.
It's just a further way mark down the path from being able to self-repair, too, right?
Besides, self-replication isn't necessary for self-aware robot apocalypse. Again, see "I Have No Mouth, and I Must Scream".
Self-aware, flying robots, who can drop bombs on a dime and self-repair, are you kidding?
Every general wants that!
Von Neuman's War
Does your local auto mechanic build cars in his garage?
We're talking massive increase in complexity in every step here.
I'm sure there is more than one exact way for it to come about. There are many ways to skin a cat.
Not a problem when most of our manufacturing is done by them.
Easiest condition to meet of the three. A machine doing its thing will get less and less attention as time goes by.
"I'm sure there is more than one exact way for it to come about."
I'm not. Not for true replication. That is the point about mutations; random changes to complex 'things' are mostly fatal.
"Not a problem when most of our manufacturing is done by them."
"Easiest condition to meet of the three. A machine doing it's thing will get less and less attention as time goes by."
Good points; I'll think on them.
And yet, we're here discussing it. Both of us results of random mutation.
Coeus| 11.20.12 @ 11:28PM |#
"And yet, we're here discussing it. Both of us results of random mutation."
The Big Bang happened once. Once.
"The Big Bang happened once. Once."
My understanding is that all we know for certain is that our Big Bang happened at least once.
Random mutation + 3 billion years, you mean.
or 5,000 years, if Senor Rubio is to be believed.
"if Senor Rubio is to be believed."
He's not.
"Random mutation + 3 billion years, you mean."
And how many failures in that time?
Transcription errors in computer code occur with much greater frequency than errors in genetic code. And let's not forget that these happen once per "generation". A generation in this instance is just a single compiling of the code.
Also, don't forget the part viruses play in mutation.
Finally, understand that the majority of that 3 billion years was spent getting from one to a few cells. It exploded after that. And the robots are already starting out very complex.
"And yet, we're here discussing it. Both of us results of random mutation."
And for the heck of it, there is far more 'space' in both my hands than there is matter, and yet the sound of clapping happens when I try to pass one through the other.
That's a very pre-quantum view, Sevo.
I am unsure how that applies to mutation.
"But animosity towards humans probably doesn't need to be programmed--all the AI has to do is not care either way about humans, and it could be hard to tell [the difference] between...not caring either way about humanity and animosity."
I would be remiss in my responsibilities to my Gen X roots if I didn't include this link:
http://www.youtube.com/watch?v=K1EUxGZQoDU
It pretty much explains everything.
and it strongly suggests that humans try to legislatively head-off that long-feared development.
Right. Because legislation fixes fucking -everything-.
As quaintly charming as Reason's ludditism is when it comes to advances in military warfare, the fact of the matter is that there is nothing particularly noble or humanizing about personalized, "human" combat.
The reason this is wrong is because from the 50,000 foot perspective all human societies are merely ways to organize armed force.
Human societies have always tended to hover around the best available methods given the military technology of the time to organize armed force.
Looked at this way, the transition from imperial Rome's political order to the political order of feudalism (as an example) is mainly an exercise in Europe adapting its political order from producing large, disciplined, centrally organized infantry armies to producing smaller, decentralized armies built around the armored horseman.
Modern political systems - basically democracy, one-party totalitarianisms, and populist theocracies - can be seen as nothing more than variants on ways to organize mass popular armies armed with standardized gunpowder weapons.
This being the case, if the need to raise mass popular armies goes away, all of those political systems will go away, too. And the question becomes: what sort of political systems are likely to arise if armies become "large numbers of killer robots controlled by a technical elite"? I don't know, but I'm betting they won't be good.
nothing particularly noble or humanizing about personalized, "human" combat.
The "good" thing about modern warfare (as an arbitrary line let's call modern warfare anything from Gustavus Adolphus forward) is that it has required large numbers of motivated men.
You could accomplish medieval warfare or Renaissance warfare with small numbers of motivated men, but to fight in the gunpowder era you needed large numbers of motivated men.
So societies that didn't want to be everybody's bitch had to sit down and figure out, "Hmmmm...how can we motivate large numbers of men to serve in our armies for shit pay?"
Some societies did that by gradually extending pieces of the sovereignty out to larger and larger groups of people. They became democracies.
Some societies did it by whipping people into a frenzy using propaganda, ideology and leader worship. They became one-party states.
Some societies used religion instead of ideology.
Some small nations managed to avoid these choices by getting weapons and monetary support from larger nations of one of the above three types.
But if you no longer have to answer that main question, why bother with any of this shit? It's all work. If the new question becomes, "How can I get my hands on the controls to the killer robot army?" you can dispense with all that work.
So I'm not really worried about self-replicating robots or SkyNet or any of that.
I'm worried about the fact that if it's 1930 and I want to kill all the Jews in Europe, I need the support of millions of men to do it. I need to convince them, somehow, to go along with me in my plan to kill all those Jews. And that's a huge pain in the ass to get done, so it doesn't happen all that often.
But if it's 2080 and I decide I want to kill all the left-handed people in the world, all I need is to fight my way into the robot army control room and then the rest of you get to eat my shit and like it. Forever. And left-handed people better Anne Frank it toot suite or my robot servitors will have them loaded on to trains by the end of the week.
Like we don't have enough to worry about. Doorknobs, lightbulbs, scissors, can openers, spiral binders, inkpens and computer mice, to name a few. Now I have to worry about robot genocide?
Um ok here is the problem with this legislation.
Let's assume for the sake of argument that somehow you get every nation on the planet to sign off on it and that they actually honor the treaty. You will still not prevent the development of armed autonomous robots.
The technology needed to create them is largely off-the-shelf hardware and then the programming, which itself is largely becoming off the shelf. From whatever date a nation state could initially deploy such systems, add less than 10 years and a hobbyist (or the mafia, or any terrorist organization) will have the ability to develop them. Wait another 10 years and you'll have 15-year-old kids accidentally turning their household robots into killers.
Deciding to stop the development of relatively autonomous robots being armed essentially means stopping pretty much all research into robotics.
I think it's time to bring the killer robots home so they can be with their families in time for the holidays.
/s
"The report concludes, convincingly to me, that "[t]o comply with international humanitarian law, fully autonomous weapons would need human qualities that they inherently lack." Basically, we might give weapons systems the ability to hunt and kill people, but we don't know of any way to give them human moral judgment."
I'm thinking that leaving *human* moral judgements out of the trigger pullers, eh, manipulators would be preferable - we have way too many instances of human fighters killing innocents because of things like "a sense of self-preservation" or "I hate gooks" or UN sex slave camps, from people who are quite capable of making human moral judgements already.