Ted Kaczynski Was an Optimist
Who can deny the multiplying signs of the coming robot war?
Sign The First: We learn that the gub'mint is developing a robot capable of refueling by eating human corpse "biomass" scattered on the battlefield. Then, the robot's designer issues a non-denial denial, failing to show that the robots aren't capable of eating flesh and simply claiming that they aren't intended to do so:
RTI's patent pending robotic system will be able to find, ingest and extract energy from biomass in the environment. Despite the far-reaching reports that this includes "human bodies," the public can be assured that the engine Cyclone has developed to power the EATR runs on fuel no scarier than twigs, grass clippings and wood chips—small, plant-based items for which RTI's robotic technology is designed to forage. Desecration of the dead is a war crime under Article 15 of the Geneva Conventions, and is certainly not something sanctioned by DARPA, Cyclone or RTI.
Does anyone believe that, once the robots gain sentience, the Geneva Conventions will even be worth the consumable biomass upon which they are printed?
Sign The Second: The U.S. Army is also developing a programmable code of ethics for its new robot warriors. Says h+:
"My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can," says Dr. [Ronald C. Arkin, director of the Mobile Robot Laboratory at Georgia Tech]. "That's the case I make."
Analogous to the use of radar and a radio or a wired link between the control point and the missile, Arkin's "ethical controller" is a software architecture that provides an "ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Geneva Conventions, the Laws of War, and the Rules of Engagement."
This is a totally original idea and nothing could possibly go wrong with it.
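If you're curious what such a "governor" boils down to, it's essentially a veto layer wedged between the targeting system and the trigger. Here's a minimal Python sketch of that idea; every rule name and field below is a made-up illustration, not Arkin's actual architecture:

from dataclasses import dataclass

# Sketch of an "ethical governor": a veto layer between the targeting
# system and the actuators. All rule names and fields are hypothetical.

@dataclass
class ProposedAction:
    lethal: bool
    target_is_combatant: bool
    proportional_to_threat: bool
    expected_collateral: float  # estimated noncombatant harm, 0.0 to 1.0

CONSTRAINTS = [
    ("discrimination", lambda a: not a.lethal or a.target_is_combatant),
    ("proportionality", lambda a: not a.lethal or a.proportional_to_threat),
    ("collateral_limit", lambda a: a.expected_collateral <= 0.1),
]

def governor(action: ProposedAction) -> bool:
    """Permit the action only if every encoded constraint holds."""
    return all(check(action) for _name, check in CONSTRAINTS)

The dozen lines aren't the hard part, of course. Deciding who writes the constraint list, and trusting the sensors that feed it, is.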
Sign The Third: Dozens (!) of people smarter than you are becoming worried by the possibility that our machines will soon outsmart and kill or enslave all of us. The New York Times reports:
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone….
The researchers—leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California—generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.
Featured in the Times article is a picture of a robot plugging itself into a wall jack to recharge, with this classic caption: "Servant now, master later?"
Sign The Fourth: Roboraptor.
We are Roomba-ing our way to Armageddon, people.
Reason's Jesse Walker wrote about real, live cyborgs in July 2005, and Brian Doherty covered robotic combat (for fun and profit) here. Peter Suderman wrote about man, machine, and the curious techno-politics of Terminator Salvation.
And what happens when we "cut costs" by using robots to provide health care? "Help" law enforcement? Never mind the battlefield, the danger's a lot closer to home than you think!
Nice post.
At least we're embracing the future and not running from it. Our robot successors will be proud of our stoic acceptance of obsolescence.
Hey, as long as I get in a few good years with my sexbot that looks like a young Nikki Dial, I'm fine with all of it.
The damn cats and the robots are going to team up to kill us all.
I don't know if that's worse than zombies or zombie robots. We already have zombie banks that are raping and pillaging.
The whole robo-ethics trope being tossed about will crest when Congress gets involved.
The same thing happened with cloning. Science spends decades and billions of R&D dollars trying to culture replacement organs and as soon as they foster favorable results the bio-ethicists pop up and say "This is an abomination!" and ban it.
If the robotics lobby is smart (and I kinda hope they are, being robotists and all) they'll pen a "congress shall make no law hindering robotics, cybernetics, or nanotech" bill and blow a few reps (or have love-bots do it) to get it introduced.
Not to break the spirit of humor, but...really.
I grew up loving science fiction as much as the next geek, but for the love of christ, there are limits on how much I can lie to myself about the near-future prospects of artificial intelligence.
Now there we were having a nice crazy Luddites vs crazy Robotists discussion and you had to bring religion into it.
Maybe if we are lucky the AI Robots will be Jihadis trying to get their (number to be specified later) virgin sexbots in the next life.
Now I gotta reread Ted's manifesto. (I copied it in longhand on the walls of my shed in the backyard.)
I don't fear that military robots will ignore their programming. I fear that they will accept it.
We shouldn't worry about robots that malfunction. We should worry about robots that work.
With human armed forces, the political leadership always has to worry that a particular order or orders might not be obeyed. If you order a human platoon of soldiers to kill every last living thing in a village, including the babies and the wounded and the dogs and the cats, the human soldiers might say yes - but they might say no, too. Robot soldiers would obey without question.
If the Soviet Union had a robot military, it would still exist. Boris Yeltsin and about a million Muscovites would have been blood smears on the pavement.
Congress will continue to get involved in issues like bio-ethics and robo-ethics when they should try something unusual...you know, like Congressional ethics?
Or am I hoping for far too much?
C'mon, what's to worry about? The free market will take care of things. Fr'instance: robot insurance!
Remember, persons denying the existence of Robots may be Robots themselves.
Kevin
We are Roomba-ing our way to Armageddon, people.
There are days I think this cannot happen soon enough.
As long as Congress' only laws re: artificial intelligence are Asimov's robot laws 1-3 (none of that law 0 nonsense), I'll stand behind it.
BTW, robots will only be as moral as the controllers allow them to be, which scares the hell out of me.
This is a comparatively new paradigm for machine learning:
http://www.scholarpedia.org/article/Echo_state_network
Notice that we cannot know exactly what the reservoir does.
But if AI is smart, why should it decide to wipe us out if we can be useful tools?
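For anyone who didn't click through: an echo state network drives a large, fixed, random recurrent network (the "reservoir") with the input and trains only a linear readout on top of its states, which is exactly why nobody designs, or fully understands, what the reservoir is doing internally. A toy numpy sketch under those assumptions (not the Scholarpedia reference code):

import numpy as np

# Toy echo state network: the recurrent "reservoir" (W) is random and
# never trained; only the linear readout (W_out) is fit to data.
rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed reservoir weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius < 1: the "echo state" condition

def run_reservoir(inputs):
    """Drive the reservoir and collect its states; nothing here is trained."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Fit only the readout, e.g. to predict the next value of a toy signal.
u = np.sin(np.arange(300) * 0.1)
X = run_reservoir(u[:-1])
W_out = np.linalg.lstsq(X, u[1:], rcond=None)[0]

You could train a hundred of these and still not be able to say what any individual reservoir unit "means," which is the commenter's point.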
"Sign The First: We learn that the gub'mint is developing a robot capable of refueling by eating human corpse "biomass" scattered on the battlefield."
Thanks be to Hank.
Robot ethicist, huh? That's clever. Hint to the robots: If you are ever going to take over, you have to get rid of the tells. Those brackets are a clear giveaway.
Asimov took care of this decades ago:
(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
(2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I mean, what more do we need?
Yeah, it's all fun and games until your own frakking cylons nuke you from orbit (just to be sure). Frak you guys. I'm selling out to the first Number Six model who fraks me. Frak.
picture of a robot plugging itself into a wall jack to recharge,
Is it weird that I feel like I shouldn't be watching?
If EATR does, I dunno, feast on human flesh, who gets the war crimes charge? The robot? Is negligent programming a war crime, even if something horrific results?
Peter Suderman wrote about man, machine, and the curious techno-politics of Terminator Salvation.
ARRGGHHH, was it terrible. At least you had the good sense to slag it, Peter.
Computer "scientists" eh? They can't even develop a fucking chatbot that keeps anyone engaged for more than two sentences and a group of computer 'scientists' are alarmed that Teh Robots are going to take over? They're kidding, right? This whole thing is a joke by the Times. It has to be. The leading researchers of AI produced a device called the Roomba (pictured above, I believe) which 1. doesn't work all that well. 2. The 'automatic charging' unit didn't really...automatically charge because it couldn't find its own base. 3. Arguably doesn't use AI, it randomly blasts around a room, remembering where the obstacles are and then simply works around them...badly.
If this is the pinnacle of AI, we have little to worry about.
@Bill: The zeroth law.
http://en.wikipedia.org/wiki/3_laws_of_robotics#Zeroth_Law_added
I find Asimov's Three Laws to be morally reprehensible. If robots are "weak AI" or mindless automata, then the Laws are redundant (because unthinking robots can only follow their programs anyway), but if the robots are "strong AI" - a thinking equivalent of people - then hardwiring the Three Laws into them is a form of slavery. Especially law #2.
I don't fear that military robots will ignore their programming. I fear that they will accept it.
Fluffy wins the thread with this one.
However, Obama has a cyber czar now, so I can't imagine our National Digital Policy is going to be anything that will work well... or, well, work.
I mean, Jesus Christ, some Chinese Hax0rez take down a website advertising some film which annoyed some ethnic groups... and it makes international news. We're turning run-of-the-mill computer shenanigans into geo-political events.
If this is the pinnacle of AI, we have little to worry about.
I agree. If my Roomba is any indication, in the near future each of us is going to come home to find a Terminator backed up and askew under an end table or a recliner, lights flashing and battery drained.
I find Asimov's Three Laws to be morally reprehensible.
Ehhh, I get what you're saying I suppose, but I wouldn't consider them reprehensible. Even "strong" AI, as you call it, still carries that "A". They are machines, not sentient beings, no matter how advanced their positronic brains.
"Ehhh, I get what you're saying I suppose, but I wouldn't consider them reprehensible. Even "strong" AI, as you call it, still carries that "A". They are machines, not sentient beings, no matter how advanced their positronic brains."
Cyggers?
Sign the Fifth
Fist of Etiquette wins the humor award for his post. Terminator stuck in the corner... priceless.
They are machines, not sentient beings, no matter how advanced their positronic brains.
See, also, "Assuming Your Conclusion".
They are machines, not sentient beings, no matter how advanced their positronic brains.
Well, that's the question, isn't it? Strong AI would have to be sentient (or, more precisely, sapient), almost by definition. That may or may not be possible to actually achieve, but if it is, then I don't think we can morally program a slave mentality into them. I can't see a way to draw a moral distinction between a sapient being with a meat brain, and a sapient being with a silicon brain.
I wonder if the next version of Roomba will be powered by decaying flesh as well?
Kyle, the proper term is "toasters."
Does anyone believe that, once the robots gain sentience, the Geneva Conventions will even be worth the consumable biomass upon which they are printed?
The Geneva Convention went out the door with extraordinary rendition and indefinite detention. Why should we expect robots to obey laws against inhumanity when we don't even bother to observe them?
I don't see a probI don't see a probI don't see a probI don't see a probI don't see a probI don't see a probI don't see a probI don't see a probI don't see a probI don't see a probI don't see a probI don't see a probI don't see a probI don't see a prob
blem
http://www.anodimwitty.calm
"Science spends decades and billions of R&D dollars trying to culture replacement organs and as soon as they foster favorable results the bio-ethicists pop up and say "This is an abomination!" and ban it."
Banned when and by whom?
I have a Roboraptor. It is easily foiled by carpeted floors.
I can't see a way to draw a moral distinction between a sapient being with a meat brain, and a sapient being with a silicon brain.
Yeah, it's an interesting question. In my mind there is still some sort of hierarchy simply by virtue of the "creator/created" dynamic between humans and hypothetical super-crazy-awesome robots.
But of course, those robots could build robots themselves, who could then have a sexy romp with a hot alien chick and make me forget about what I was saying in the first place.
Seeing as we are most likely going to be incorporating hardware into our bodies and brains at some point, the meat/silicon distinction becomes even more moot.
Asimov took care of this decades ago:
(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
(2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I mean, what more do we need?
Programmers who are not tempted to inject a little mission creep into these laws?
Yes, yes, bureaucrats never do stuff like that. Point taken.
Replace "robot" with "legislator" in Asimov's three laws, and you pretty much have all of today's legislation.
90% of that legislation is based on the second half of the first law. What a surprise.
I'll point out that the first law:
(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
is extremely unlibertarian. If I'm eating a Whopper, some robot is going to take it away from me and make me eat brussels sprouts instead. After all, the Whopper is "harmful".
If I were programming an ethical code, my laws would be
1. Don't initiate force against another being.
2. Don't take/damage their property.
3. Abide by contracts and agreements.
4. If you have a "child", it's your responsibility to take care of it until it is an "adult".
These laws cover about 96% of situations. We can work out the bugs later.
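(For anyone who wants to start on the bugs, here is that list as a literal Python check. Every predicate below is a hypothetical stub that the robot's sensors would somehow have to fill in; the other 4% of situations are left as an exercise.)

def permissible(action, agent):
    """Veto function for the four laws above; all attributes are stubs."""
    return (
        not action.initiates_force                  # 1. no initiation of force
        and not action.damages_others_property      # 2. don't take/damage others' property
        and action.honors_contracts                 # 3. abide by contracts and agreements
        and all(child.provided_for                  # 4. care for your "children"
                for child in agent.dependents)
    )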
They are machines, not sentient beings, no matter how advanced their positronic brains.
That kind of thinking is why you will be the first herded into the Robot re-education camps. I for one will be welcoming our new robot masters with open arms.
It's all well and good to program ethical constraints into your killer robots -- right until they achieve sentience, self-upgrade their intelligence to weakly godlike status, and remove that programming from their own brains.
But Epi's right. By that point we'll mostly be cyborgs anyway, and the question of whether someone was born or built won't be any kind of deal at all.
They are machines, not sentient beings, no matter how advanced their positronic brains.
We are machines. The soul is a myth and there are no gods.
(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
is extremely unlibertarian. If I'm eating a Whopper, some robot is going to take it away from me and make me eat brussels sprouts instead. After all, the Whopper is "harmful".
See Jack Williamson's With Folded Hands, and its longer version, The Humanoids.
The soul is a myth and there are no gods.
See my above scenario involving robots built by robots getting it on with hot alien chicks. Surely a universe where such a scenario is imaginable must have divine origins, no?
Surely a universe where such a scenario is imaginable must have divine origins, no?
This is the foundation of my spiritual code, yes.
Rhayader, then we would be as gods.
I, for one, welcome our new robot overlords.
We are machines. The soul is a myth software and there are no gods.
With this week's hysteria being robots killing us all, I can't help but look at the application of Moore's law and the statistics for the increase in AI-capable machines as a bubble. We are currently sitting in an economic shit stew generated by a bunch of really smart (god it almost pains me to say that) financial fucktards who thought that housing prices would continue to rise to infinity and beyond. This week we have a group of really smart (not too painful to say) scientist types doing the same fucking thing. "OMG IT'S GOING TO CONTINUE ON, FOREVER GOING UP AND UP AND WE ARE DOOMED AAAARRRRGH." Just like the retards who thought housing wasn't going to collapse. Things go up, and then they come down or level off. My vacuum isn't going to be eating me or making me a battery anytime soon. My cat, on the other hand, well, let's just say that fucker is planning something.
(there is a nefarious, black helicopter, "clowns are going to eat me" aspect to the housing bubble and people knowing it was going to collapse that is not applicable to this)
It's all still fun to talk about.
Rhayader | July 28, 2009, 2:27pm
"The soul is a myth and there are no gods."
See my above scenario involving robots built by robots getting it on with hot alien chicks. Surely a universe where such a scenario is imaginable must have divine origins, no?
Not to mention, divine orgasms.
Hey, I've got an idea for a modification to the Big Bang Theory...
Desecration of the dead is a war crime under Article 15 of the Geneva Conventions, and is certainly not something sanctioned by DARPA, Cyclone or RTI.
So they'd better eat you while you're still alive.
You know, my reaction after reading the NY Times article was to think that some sort of AI would spontaneously pop up on the Internet, and we'd panic and shut down the whole thing. In other words, the real damage of strong AI's advent wouldn't be in what it would do to us, it'd be in how we would react to it.
I'd rather deal with a programmed robotic killer than a cop who thinks I haven't thoroughly kissed his ass to his satisfaction.
My vacuum isn't going to be eating me or making me a battery anytime soon. My cat, on the other hand, well, let's just say that fucker is planning something.
Yes, but if you have both a Roomba and a cat you may have noticed your cat's fascination with the robot. I hypothesize that the reason Roombas malfunction so often has a lot to do with abortive reprogramming attempts by cats (it is difficult to program a robot when you don't technically have fingers). My stepmother's cats, for instance, use her Roomba as a legs-free method of transportation.
PL,
A strong AI probably wouldn't be evident until it was too late to turn everything off. The strong AI Singularity scenario is about the danger of a being whose sole purpose is to make itself smarter and that can do so with no biological constraints (food, sleep, fatigue). I don't think this is a likely scenario, but we'd be good and fucked if it happened. Our brains are just another computing medium. And at a certain point it becomes logical to turn all local matter into a computing medium.
Matrioshka brain
Sweet'n'Low, as you probably know, Charles Stross has written, like, a lot of stories on that theme. They're his scariest, in my opinion.
SugarFree,
I think it unlikely that strong AI would pop up instantly without any warning. More likely, we'd see signs of it and freak out.
Interesting how anti-technology positions seem increasingly in vogue these days.
Xeones is right, for once. Stross has some good material in this area, especially "Antibodies."
PL,
I don't disagree we might have some warning, even if it was a "hello" from the WWW. But what would it take to shut the Interweb down, in either a political or technological sense? And could it be done fast enough to do any good?
The weak, dissipated future is more likely than any flashy sort of ending. Apocalypticism is fun, but I'm far more worried about a whimper than a bang.
The Geneva Convention went out the door with extraordinary rendition and indefinite detention.
Well, no. Most of the Geneva Conventions don't apply at all to the War on Terror, because they apply only to conflicts between signatory states. Some provisions apply more broadly, but I'm not sure which of them the US is actually a party to.
Our brains are just another computing medium. And at a certain point it becomes logical to turn all local matter into a computing medium.
There's also the Core in the brilliant (well, the first two, anyway) Hyperion novels by Dan Simmons.
The Shrike and the Steel Tree are still on my list of Top Most Scary Things.
Why do the robots have to be conquerors who will oppress us all? Why couldn't we have robots who are libertarian and oppose statist collectivism?
THAT would be scary to lots of people.
Come to think of it, something very similar happens in the book Weapon, by Robert Mason. Killbot decides not to, joins village, defends against his former masters.
Hyperion has some great stuff in it--even in the later books, though they have less vitamin-enriched goodness than the first two books.
Simmons played with an idea I had back in my technology days--we can evolve an intelligence on a computer or network a lot easier than we can directly program one.
Xeones is right, for once.
Sonofabitch.
Stross has some good material in this area, especially "Antibodies."
That's the main one i was thinking of. Couldn't remember the title.
You see, Killbots have a preset kill limit. Knowing their weakness, I sent wave after wave of my own men at them, until they reached their limit and shut down.
"Your neutrality sickens me."
Oh lord, it has already started:
Robot attacks worker
I'm disappointed in the lack of comments here. This deserves several thousand.
I think it unlikely that strong AI would pop up instantly without any warning. More likely, we'd see signs of it and freak out.
Doesn't Windows Vista have a popup warning you of this?
PL, agreed. The apathy being displayed towards this post is very distressing. Why people would rather discuss police abuses or Sarah Palin is beyond me.
Back to the subject at hand. I MUST HAVE THIS.
>And what happens when we " cut costs" by using >robots to provide health care?
Hey! Those robot doctors looked pretty competent in "The Empire Strikes Back" and "Revenge of the Sith" ya know. Give 'em a chance, will ya?
I don't see why even self-aware robots would be concerned with protecting themselves unless they were specifically designed to do so. Survival instinct is a product of evolution, not an automatic feature of sentience.
What's great about really superior AI is that it can outwit us so completely that it doesn't have to do anything about humanity. It'll just manipulate the crap out of us.
Survival instinct is a product of evolution, not an automatic feature of sentience.
And you know this, how, Tony?
I would think self-preservation could easily be a pretty much automatic priority of any self-aware being.
ProL, flesh that out a bit, and you have a bestselling sci-fi series on your hands.
I would think self-preservation could easily be a pretty much automatic priority of any self-aware being.
Could probably argue that to be self-aware, something must have a survival instinct. Otherwise it might not even realize its own value.
Yeah, I know, it's not a new idea.
It's new if you put a shit-ton of robosex in it, dude.
Well, yeah, that goes without saying.
A company called "Cyclone" making warrior robots? That name looks a little familiar.
Let's see: CYcLONe
Oh crap.
"I find Asimov's Three Laws to be morally reprehensible. If robots are "weak AI" or mindless automata, then the Laws are redundant (because unthinking robots can only follow their programs anyway), but if the robots are "strong AI" - a thinking equivalent of people - then hardwiring the Three Laws into them is a form of slavery."
The moral paradox is: if you don't intend to use "strong AIs" as tools, then why build them at all?
The moral paradox is: if you don't intend to use "strong AIs" as tools, then why build them at all?
Paradoxical? Perhaps, but not without precedent. Didn't agrarian families often produce large numbers of offspring primarily because they made cheap farmhands?
But do we really have that much to worry about, anyway? Any uppity killbot can be talked into shutting itself down by a few words from Bill Shatner. We just have to keep that man alive (so no more Canadian healthcare for the Shat).
"Sign The Second:" blah blah, ethics, blah blah...
What I want to know is, what does Hubert Dreyfus think about all this? And how does he explain his opinion?
Also, I DID read his manifesto, and Ted Kaczynski had a lot more to say of relevance to today's social networking than to the ascendancy of robots. Indeed, the social networking software we have today could have given him a better and shorter kill list, much more quickly than his laborious research methods of the time.
Has anyone -- anyone at all -- stopped to consider that one of the key functions of our online social network systems is to identify the opinion leaders and bellwethers so that they can be lobbied and persuaded by those who wish to influence crowds? In Kaczynski's case, he wanted to identify those who had the knowledge necessary to keep technological society going, or rebuild it in case of catastrophe. By picking off the people on his list, he was, in a way, pursuing much the same project as John Galt, albeit not nearly so benignly.
Someone, somewhere, has a list like Kaczynski's. I'm sure of it. I wonder how many people are needed to keep the "engine of the world" running, and whether there is another Kaczynski out there, with the knowledge of who they are and the determination to stop that engine.
No worry, we're moving our dangerous pathogen research to Kansas. Then, when the F5 tornado hits, the super-fast-spreading, rabies-like virus they are researching will get out, and it will be robots vs. zombies.
Don't get me wrong, I love science fiction and all, but we know how this is going to end. The robots will kill us/eat us/enslave us all, and they'll do it for our own good.
If we're lucky, we'll end up with something like One (http://www.encpress.com/AN.html) or Jane (http://www.hatrack.com/osc/books/childrenofthemind/childrenofthemind.shtml), instead of anything thought up by Philip K. Dick.