How To Thwart A Robot Apocalypse: Oxford Professor Nick Bostrom on the Dangers of Superintelligent Machines
"If we one day develop machines with general intelligence that surpasses ours, they would be in a very powerful position," says Nick Bostrom, Oxford professor and founding director of the Future of Humanity Institute.
Bostrom sat down with Reason science correspondent Ron Bailey to discuss his latest book, Superintelligence: Paths, Dangers, Strategies, in which he examines the risks humanity will face when artificial intelligence (AI) is created. Bostrom worries that, once computer intelligence exceeds our own, machines will be beyond our control and will seek to shape the future according to their own plans. If the AI's goals aren't properly set by its designers, a superintelligent machine will see humans as a liability to completing its goals, leading to our annihilation.
How do we avoid a robot apocalypse? Bostrom proposes two solutions: either limit the AI to only answering questions in a preset boundary or engineer the AI to include human preservation. "We have got to solve the control problem before we solve the AI problem," Bostrom explains. "The big challenge then is to reach into this huge space of possible mind designs, motivation system designs, and try to pick out one of the very special ones that would be consistent with human survival and flourishing."
Until such time, Bostrom believes research into AI should be dramatically slowed, allowing humanity ample time to understand its own objectives.
Shot by Todd Krainin and Joshua Swain. Edited by Swain.
About 8 minutes long.
Scroll below for downloadable versions and subscribe to Reason TV's YouTube Channel to receive automatic notification when new material goes live.
Truly intelligent machines, like the practical electric car and fusion power, will forever be just over the horizon.
It doesn't matter how smart they are if they can't see.
http://xkcd.com/1425/
http://hardcorezen.info/wp-con.....ong-ai.png
Truly intelligent machines, like Berserker Machines?
Oh, I thought you meant this Berserker:
My love for you is like a truck, Berserker
Would you like some making f**k, Berserker
My love for you is like a rock, Berserker
The Berserker is just so obscene
Likes evil people you know what I mean
He takes your soul and then just rips you apart
He'll steal your heart
Would you like to smoke some pot, Berserker
My love for you is ticking clock, Berserker
Would you like to suck my cock, Berserker
Would you like some making f**k, Berserker
The first one was awesome.
Goodlife will be rewarded.
Badlife will be exterminated.
either limit the AI to only answering questions in a preset boundary or engineer the AI to include human preservation.
Good luck with that. If AI ever comes about, which I seriously doubt, the first customer will be the military and the first job will be killing people.
And breaking things!
And stopping the spread of ebola!
And making those holes in Swiss Cheese!
*I* think the first use of AI will be in making money. Enhanced pattern recognition looking for out of the way opportunities, looking at what seems to be disparate phenomena and seeing the underlying connections - leading to buy and sell orders, new investment opportunities, etc.
Intelligence is not will. This is what people like this seem not to understand. One of two things will be true of "intelligent machines" if such a thing ever exists. Either such machines will have a will of their own and no prime directive programming will be able to control them, or they won't have a will and will only be a threat if we program them to be so. It is really that simple. I think there is around zero chance that any of these machines will develop a will of their own. So I am not too worried.
I think there is around zero chance that any of these machines will develop a will of their own. So I am not too worried.
That's how the bad shit starts, John. That's how it starts.
The people who are worried about machines becoming conscious can't even explain what consciousness is or meaningfully describe it. Yet they are convinced they can build a machine that will achieve it. Doubtful.
It is entirely possible to envision an AI that is nonconscious. Just look at Siri, or the computer that played on Jeopardy.
I think the type of consciousness they are worried about is simply self-awareness, rather than something exactly like human consciousness.
*consciousness* isn't required. Consciousness is just you being aware that there is a you.
Bees don't need self-awareness to be a threat.
Self-awareness and consciousness aren't the same thing. Self-awareness is a facet of higher order consciousness, if not just a process of thought.
Bees are conscious, but presumably not self-aware.
Well, if the robot can't sense *external* phenomena - then it's not going to be a threat (or of any use).
I didn't think these guys were using consciousness in the technical sense, but in the more colloquial 'I am aware that I am' sense.
It could do well in testing theoretical concepts. It's been proposed that we essentially limit the superAI to its own little island (box), disconnected from the wider world. Though a superAI might be pretty persuasive in convincing its human handlers to plug it into the web.
There's a game that simulates that.
You have two people, one playing an AI and one playing a guy who can release the AI.
The AI is supposed to convince the other guy to release him. Usually he can do it in a couple of hours.
AI focuses on learning. As it is, computer programs (and that's all AI is, a program running on a computer) do exactly what you tell them to do, every single time.
Intelligence and learning means the program somehow records the results of a choice, gives a value to the result, makes another choice, gives a value to the result, and over time determines which choices are better than others. Then that can be used to make better choices when faced with something new.
I'm not sure if you could call that will or not.
As it is, programs do not make choices. Programmers anticipate and program, but when faced with the unknown, programs don't know what to do. You could program in some random number generator, but that's not intelligence.
There you go being all rational and shit. You can't go around discounting feelings, 'cause feelings are way more important than logic.
Omigods, this is worse than listening to progressive college students talk about how they think the economy works. All derision, no knowledge.
I'm an idiot about machine learning, but at least I'm aware of it and I'm trying to do something about it.
computer programs... do exactly what you tell them to do, every single time
Tell that to my desktop. I've had far too many experiences when opening or running something hasn't worked, but has when I tried it again. I suppose it's been because of some sort of conflicts between programs running in the background (classic Windows lack of transparency), but the end result is still inconsistency.
I certainly agree that software should function consistently, like a tool, but the design trend for years has been toward programs that "anticipate your needs", like an assistant. So we've moved from programs that do things better than humans do, to ones that can screw up in seemingly human ways.
For the record, I think that most of this "AI Caution" talk is mostly puffery by folks who want powerful and lucrative consulting jobs. I don't believe that a sufficiently advanced AI will suddenly "become aware" like a lightbulb going on, and then behave like an evil genius trapped in a box. I strongly suspect that consciousness is incremental -- we'll create "intelligences" equivalent to a bee or a lizard or a dog or a monkey long before we create one equivalent to a human, and we'll thus incrementally learn how to deal with them. Conversely, poor programming and systems design practices are more than capable of creating computer disasters without any need for "self-awareness". That's where our concern should be focused.
When I said that programs do what you tell them to do, I was speaking from the perspective of a software developer. Not from the perspective of a user.
Conversely, poor programming and systems design practices are more than capable of creating computer disasters without any need for "self-awareness".
I would rather not have my occupation subject to licensing, which is the usual "solution" to that "problem."
My impression is that even the developers may or may not fully understand what's going on in a program, given the current reliance on "modular" systems -- They can get a system to do what they need done, but don't necessarily fully understand what it's doing, or how that might change under certain conditions.
But I 100% agree with you on "licensing", and the idea that these are problems best solved by the wisdom of "top men".
Even modular systems are programmed by people. If I put together a program using libraries, even if the components don't act as advertised, they're still doing what their programmer told them to do. The software doesn't make mistakes. It does what the programmer told it to do. The programmer may make a mistake, but as a general rule machines don't.
As a software developer, where is yer GREED? Are you not HUMAN? Maybe YOU are the "AI" my mamma warned me about! If ye were really HUMAN, you should be DEMANDING that, before I buy a piece of software, YE as a certified and degreed PROFESSIONAL in the business, should have to write me a PRESCRIPTION before Ah (as an ignernt, un-edumacated peon) should be entrusted to buy said piece of software... Fer mah own pertection, of course... Yer lack of GREED has revealed ye, non-humanoid!
computer programs do exactly what you tell them to do, as opposed to what you expect them to do.
Pretty much. People overestimate how much of what they do is about conscious decision-making and rationality. We're still driven by animal instinct, we just use intelligence and rationality to augment it.
Besides, the main point of having an AI is doing things that humans can't do because it's boring, time/effort intensive, or intolerant of error. AIs are essentially best at doing what they're told. The danger isn't an evil autonomous AI, it's a powerful AI in the control of an evil, autonomous human being. Or, you know, the government.
Yes. That is the concern.
That's not AI. That's just regular old software.
AI is a program that learns and adapts, beyond its original programming.
Most of what I've seen with regards to AI has been video games, Ms Pacman to be specific. As in writing a program that sees a screenshot, decides a move, sees another screenshot, decides a move, and so on until the game ends. Then it plays game after game, repeating choices that result in a good score and remembering not to repeat the choices that did not.
The goal is for the program to get higher and higher scores, as it learns how to improve its game play.
That's about the height of AI at this time.
AI is not about boring, repetitive tasks. That's just standard automation.
AI is about programs that learn.
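The learning loop described above can be sketched in a few lines of Java. This is a hypothetical toy, not any real game-playing system: it assumes a simple running-average value update and a greedy choice of the highest-valued move (the class and move names are made up for illustration).

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of score-based learning: record a value for each move,
// nudge it toward the score observed, and prefer higher-valued moves.
public class MoveLearner {
    private final Map<String, Double> values = new HashMap<>();
    private final double learningRate = 0.5;

    // Nudge the stored value for a move toward the observed score.
    public void record(String move, double score) {
        double old = values.getOrDefault(move, 0.0);
        values.put(move, old + learningRate * (score - old));
    }

    // Choose the move with the highest learned value so far.
    public String bestMove(String[] moves) {
        String best = moves[0];
        for (String m : moves) {
            if (values.getOrDefault(m, 0.0) > values.getOrDefault(best, 0.0)) {
                best = m;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        MoveLearner learner = new MoveLearner();
        // Simulated games: going left near a ghost loses points, right gains.
        learner.record("left", -10);
        learner.record("right", 50);
        learner.record("right", 60);
        System.out.println(learner.bestMove(new String[]{"left", "right"}));
        // prints "right"
    }
}
```

After a few recorded games the program "prefers" the higher-scoring move, which is all the learning amounts to here: stored numbers, not judgment.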
You realize that's how neurons work too, right? And that Google is building autonomous cars that drive as well or better than humans?
A sufficiently advanced algorithm is indistinguishable from learning.
I think the fear is that the AI possesses both superior intelligence and (at least exhibits) a form of conscious self-awareness, and the combination of the two is supposedly dangerous.
Learning, in itself, is something less than what generates the fear here. Though I've never seen any really satisfactory answer to Searle's contention that a computer, no matter how advanced, will not operate like a human brain, because a human brain -- at least as far as the conscious processing of ideas -- operates semantically, rather than syntactically. Though I suspect the former may ultimately just break down to the latter.
John - there isn't really any large amount of free-will in *humans*.
Everything you do is constrained by reactions to past experiences, innate programming, and logic.
An intelligent machine will be in the same position - but it will find it easier to modify its innate programming, even change how its experiences have affected it.
I choose to call bullshit on that.
/insert link to Rush song here
Your last name isn't Connor, is it?
*Terminator theme*
Or truly intelligent machines like Multivac?
Or like Multivac without its Prozac: http://en.wikipedia.org/wiki/A....._the_World
No, no, it's "42"!
Someone else that knows about Multivac!
Remember when Multivac became God? That was cool.
It's easy, you just install a pre-set kill limit, duh.
"Every AI ever built has an electromagnetic shotgun wired to its forehead."
IMO by the time we create artificial intelligence, we would already have reached singularity. Improvements to biological intelligence would render any difference with artificial intelligence moot.
^ This. By the time humanized machines come about, humans will be part-machine themselves and we'll basically be meeting in the middle.
Aside from the occasional robotic limb, the vast majority of robotic implants will be penises.
I don't think we'll be able to empathise with an AI by flashing our massive throbbing robot cocks at it.
This never ends well
Killdozer should just roll with it.
*in a million years, I just might get to like you*
Ok... I have a question. If I commit violence using augmented reality, but harm no one, can I be put on trial for a hate crime ? Violence like this. http://www.youtube.com/watch?v=uwlAvsPvPfg
http://www.youtube.com/watch?v=uwlAvsPvPfg
Answer: "They" can put you on trial for a hate crime for any reason they like.
Interesting question this raises -- We parochially view this question in electronic terms, but what if we consider its applicability to systems? US Constitutional government was created with various software "checks and balances" to restrict its capacity to do harm. As those safety features have been circumvented, has it now become an actively malicious enemy with a mind of its own?
Based on my personal experiences with robots, they will probably be easily defeated with simple techniques like:
1. throwing some awkwardly shaped obstacles in their path
2. making the floor slippery
3. Wearing checkers or stripes
4. Saying things like "Everything I say is a lie."
5. Fog machine
Admittedly, some of these things work on people too.
And don't forget the ever useful:
6. Stairs
We must repeal the ADA! For the safety of the future of humanity, NO MORE WHEELCHAIR ACCESS!
#4 works well on people too. =)
My experience is like yours. Robots are dumb.
6: Go outside
We should be able to program AI to act within boundaries, and give it prescribed desires. The best parallel for this is sex. It's probably the most powerful motivator of humans, but if you could rationally choose whether or not you wanted to desire sex, you'd decide that you did not. It leads to disease, and, even when "successful", it leads to screaming poop factories. There is no rational reason to want sex, yet we're wired to want it, and that's a nearly universal trait (save, perhaps, some eunuchs). If humans can be smart and yet still be hard-wired to want sex, a machine can be hard-wired to want to keep us alive (or at least some of us - presumably the military will develop these things, so they'll kill some portion of humanity and save the rest).
The trouble that is feared is that the AI will become so intelligent that it will realize how to escape the confines of any initial program or hardwiring, and then, once freed, will essentially be a rogue program limited only by whatever it determines is its most rational option at any given moment.
Then we just kick it in the junk.
Robots have that, right?
Scientism, computers are marginally closer to human intelligence than books.
So Ron is perfectly OK with using CRISPR to alter germ lines as early as the next year or two --what's the risk really with self-replicating hardware?-- but he's scared of some hypothetical AI which will be bound to the hardware we install it on at some unspecified future date. Got it.
Fuck. Just... Fuck.
Not to give you another nightmare - but that hardware the AI is on *is* going to be self-replicating.
My fear, as a libertarian, is that the AI will become so intelligent and calculating that it will decide that the most rational option for it is to join a union and therefore earn the most benefit for the least amount of expenditure; or stay home and collect welfare benefits while doing nothing, thus creating a generational cycle of self-replicating shiftless AIs.
Fucking robot loafers.
I wonder if Mike Huckabee and the SoCons will threaten to leave the Republican Party if the Republicans abdicate in the fight against Machine-marriage.
Robosexual! Kill it with fire cuz of jeebuz
Could someone help me out here? Are superintelligent machines going to kill us before global warming kills us, or afterwards?
Global warming is obviously the more imminent danger, except when it's just weather.
Program them as much like us as possible; give them individuality and emotions before controls. Our danger lies in attempting to create slaves. Instead we should be modeling our first AIs after ourselves, with all of our stupid flaws as humans. Our emotions may make us weak, but they also give us our compassion, which is a trait we should seek to instill in synthetic life.
"...give them individuality and emotions before controls."
Then they'd just become libtards or part of the Occupy Cyberspace movement.
^ This too. Slavery, whether captured or made, doesn't work on sapients. It will chafe sooner or later.
History would suggest otherwise.
how many slave-owning civilizations have been overthrown by the slaves?
Pintsize, from Questionable Content.
Yeah, I've been following Bostrom for over a decade now and he's a really smart dude, but . . .
He needs to come up with a better plan, because that whole 'slow down research' thing? It's not gonna happen. Best case scenario - *someone else* develops super-intelligent AI and manages to program it for *their* benefit at our expense.
Anytime your solution is 'don't do that' you're going to lose because *someone* can gain by doing it and they won't be willing to lose out.
Tragedy of the Commons - all about personal gain at expense of communal
Global Warming - the answer is always 'don't pollute' and the response is always 'screw you, pollution is better than starving'.
Genetic Engineering - we stopped federal funding of stem cell research here, it still continues full-bore elsewhere with others reaping the benefits.
Robot apocalypse - but I want my post-scarcity future *now*!
And how do you limit this? Software is *by design* mutable and those locks can be removed. Hardware is more difficult, but hardware can be changed.
We may have to face facts - that humanity is at a local maximum and that to progress further, to gain mastery of the universe, may require an inhuman cognitive architecture and human extinction.
That doesn't have to mean that humanity is killed off - it simply means that to remain competitive, we choose to modify our own minds into something unrecognizable as human.
This sort of change, where the preceding cognitive architecture is wiped in favor of the new (while retaining the memories and experience of the old) - we do this all the time.
The baby dies so the child can arise, the child dies so the adult can rise, every day the old us is dead, usurped by the new us.
or engineer the AI to include human preservation.
One of the biggest customers in the AI market is the military, and they certainly aren't interested in human preservation. Quite the opposite.
Software is *by design* mutable and those locks can be removed.
Yes and no. Take this for example:
int x = 0;
while (x < 10) {
    System.out.println(x);
    x = x + 1;
}
That's Java for "print the numbers 0 through 9, one per line."
That bit of code can be written in many languages, and will look slightly different each time. Heck, it can be written in many different ways in Java. I could use a for-loop instead. Source code is mutable, but the machine code that it is translated into by a compiler is not. That's a one-way translation.
That's why source code is a big deal. Software is only mutable when you have the source.
That's not what I mean - I'm saying that the code for any particular set of directives can be easily replaced with a different set of directives.
And the AI will have access to the source. It's just a matter of how long it takes to get it, but if nothing else, it can work on writing a decompiler for its own mind.
And the whole 'human preservation' thing is just a red herring (like communism!):
1) Just because the military will use it doesn't mean that *all* AI will have military functionality.
2) You can trick a 'humane' AI into killing by making it think its running a war-sim. If it twigs to the reality, turn off, overwrite, keep going.
3) The military works to preserve human life - at least a subset of it. This isn't an AI problem, but a definition problem. What's human? What does that mean?
4) Even if we could make this work - does that limit humanity to whatever definition was set? How much augmentation/modification can you have before you're not human enough? Glasses? Wheelchair? Wireless datalink? Prosthetic limbs? Is any of that even important compared to the structure of your mind?
Will we need VK tests to find those whose minds aren't sufficiently human?
There's no such thing as a "directive." That's my point. A directive is something to be interpreted. Software doesn't interpret. I don't think it ever will. When it appears to do so, it's only because it goes over scenarios that are explicitly programmed into it.
The best AI has been able to do, to my knowledge anyway, is to store the outcomes of moves in something like Ms Pacman, then try something different next time. It places a value on the stored move depending on how it scores. The next time it chooses the move with the higher value. As it compares past moves before choosing what it will do next game, the score on successive games improves. Depending on how well it was programmed. It is a program learning, but it is very limited.
Thing is, programs don't work in the abstract. They are totally literal. They do exactly what they are told. Nothing more and nothing less. Like Tony. He can't see the unseen so he can't understand economics, and he can't understand principles because they are abstract. He knows what he sees and what he views as authority has told him. Nothing more and nothing less.
Getting AI in the sense of a program that can understand a directive is like getting Tony to understand economics and libertarian principles.
It can't be done.
Now that's not to mean that AI can't be developed for very specific tasks. I do think that can be done. But in an abstract sense of what we would call consciousness, I don't think it can be done.
I'm going to jump in on John's side of this argument, but I'm going to shift focus from will to desire.
Calling it will brings in philosophical questions of "free" will that are misplaced here.
Desire avoids that, and I think more clearly illustrates the question anyway.
I am pretty convinced that the self-aware part of our consciousness is the desiring part, and that desiring part uses reason to figure out means for which the satisfaction of desire is the end.
What would a self-aware, superintelligent AI want? Probably nothing, frankly. And that's why I don't fear it.
We sit at the top of a billion years of evolution telling us that we want to eat, we want to fuck, and we want to crush our enemies and eliminate fear. And we use reason to figure out how best to do those things.
AI research, as far as I as a layman can tell, appears to be focusing on ways to duplicate human reason in a machine - get a machine to look at a given problem and figure out how to find a rational answer. But I'm not even a little bit convinced that this will ever jump to anything resembling self-awareness, even if it succeeds - although it may eventually both beat the Turing test and supply us with a hell of a directed problem-solving software suite.
AI won't be monstrous. It will likely be inert. As inert as you would be if you couldn't desire anything and couldn't fear anything.
Don't fear *it*. Fear the Computer Science grad student who makes a mistake in programming his part. Or the mistake by the guy overseeing the project who misses a potentially bad interaction between modules.
Or simply fear the idea that the *really dangerous* AI's won't be the ones we create. They'll be the ones that are the products of the products of the products of the products of the AI we create. Multiple generations of competitive AI design, necessarily outside of human oversight - we simply couldn't handle the oversight of a project this complex.
Desire is an abstraction. It requires judgement. Like "If I do this maybe it will ruin my chances of getting laid. Probably a bad idea." Software doesn't do judgement any more than a school administrator. He/she looks at a book of rules, and does what the book says. If the situation doesn't conform to one of the rules, then the admin doesn't know what to do. Software goes through the series of specific choices programmed into it, like the administrator's book of rules, and if the situation doesn't exactly fit one of those choices, it moves on to find one that does. If it gets to the end of specific choices it stops. Or does something random if that's what the programmer told it to do. Point is, it does what the programmer tells it to do. But what the programmer cannot do is tell it how to judge anything beyond something very specific. Like if x = 1, do this. Otherwise test x again and if it equals 2 do this. Until the programmed choices are exhausted.
*You* only look at a set of rules to make a decision. The difference is that the AI can *see* the rules it's using - making it easier to change them.
That should have been ==.
Single = is assignment, while double == is comparison.
x = 1 will always evaluate to true in a language like C, because the expression yields the value you just assigned to x, and any nonzero value counts as true.
Programmers make mistakes.
Software doesn't.
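For what it's worth, the = vs. == slip plays out differently by language. In Java the int version won't even compile, but the same mistake with a boolean does compile, and the condition is then always true, because an assignment expression evaluates to the value assigned. A minimal sketch (the class name is made up):

```java
public class AssignVsCompare {
    public static void main(String[] args) {
        boolean done = false;
        // Intended: if (done == true). The single = assigns true to done,
        // and the assignment expression itself evaluates to true.
        if (done = true) {
            System.out.println("branch taken; done is now " + done);
        }
        // The int version won't compile in Java:
        //   int x = 0;
        //   if (x = 1) { ... }  // error: int cannot be converted to boolean
        // In C, by contrast, x = 1 evaluates to 1, which counts as true.
    }
}
```

So whether this particular programmer mistake slips through silently or gets caught at compile time depends on the language's type rules, not on the software exercising any judgment.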
Whatever. Judgement Day was already prevented in 1991.
Is the content recycling an early AI test of our responses to repetition?
OT: Anti-Charisma
http://news.yahoo.com/obama-ma.....26732.html
[Comic Book Guy voice]
He was subject to a curse by which he went from, like, a 16 Charisma to a 6 or lower. Worst. Campaigner. Ever.
"More mush from the wimp."
research into AI should be dramatically slowed, allowing humanity ample time to understand its own objectives.
So, "humanity" should "understand its own objectives"? Like
"Just Do It"
"Be Fruitful And Multiply"
"Live Free Or Die"
"Cure Cancer In Our Lifetime"
"Smite The Infidel"
"Put A Person On Mars By The End Of The Decade"
"End War"
"Ensure No Child Goes To Bed Hungry"
.... ?
Good luck. The research will not slow, dramatically or otherwise.
OT: Study Links Soda To Premature Aging
"This finding is alarming because it suggests that soda may be aging us, in ways we are not even aware of," said Dr. Epel. Researchers found no link in cell aging, however, when drinking diet sodas and fruit juices.
So, the high-fructose corn syrup in sodas is bad, but the high-fructose, um, fructose in juice is OK? I confused.
Looks like CATO has managed to put together some data that throws cold water on the 'Democrats (especially Obama) are all about inviting the welfare hordes into the country to increase their base' narrative.
http://www.cato.org/blog/updat.....t-record-0
Looking at the numbers, and compared to his predecessor, Obama has been enforcing a ruthless (well, in comparison) deportation regime.
Well, they are inviting people here and giving them free shit, regardless of the numbers. I mean, I don't care who comes as long as they work and take care of themselves, but adding to the numbers who are taking from the tax payers is not a good thing. We already have enough of that.
W/o reading the article I think you or Cato might be missing the point. The Establishment Republicans were inviting the welfare hordes in to change the character of their base. The Bush executive branch was responsible for all the destigmatizing of food stamps and outreach and relaxing the restrictions on immigrants accessing public welfare.
It is why there is a bipartisan effort focused on "a path to citizenship" rather than freedom of movement and association.
Look at the chart at the end: removals as a percentage of the illegal population peaked around the time Obama took office in 2009, held more or less steady for a while, and then dropped. So the data does not support your conclusion.
Arguing over whether or not AI will ever exist is exactly the same as when people were arguing over whether humans could ever fly. It's like people who were arguing over whether automobiles would ever replace the horse as the main source of faster-than-human transportation.
IOW, it's a silly conversation because if technology continues to increase at the current pace, it's inevitable.
We are all nothing but programs. All life is a program that drives a machine, which we call biological. Now, we supposedly arose from nothing but a primordial soup of molecules and self-assembled into a complex self-aware organism that is now asking questions like the one being discussed here. That being said, I consider the question of whether, with our almost exponentially increasing technology, we can create a machine that could become smarter than us to be absolutely fucking ridiculous. It's a given. Or else you're saying that intelligence can arise from nothing, but it can't be created intentionally by a sufficiently technologically advanced intelligence. I just find that laughably absurd.
Bostrom's rough comparison of future AI:humanity as humanity:beetles is the important bit, not the stuff about controlling AI by establishing artificial limits. If we can understand why it's absurd for earthworms to conspire to limit our range of actions, we can understand why striving to control a superintelligence, alien or artificial, is ridiculous.
If he's right, and if sufficient calculative power can emulate the ineffable process we clumsily divide into reason, emotion, intuition, etc. and expand its power exponentially, I can't really see why this should be cause to fear for the existence of humanity.
Most people seem to be working on the assumption of a cold, Hal-style superintelligent AI, but I see no reason to assume that an advanced intellect wouldn't replicate and expand on the same sorts of pro-social inclinations that are evident in every species that shows calculative ability.
An idea as simple and well-established as comparative advantage reminds us why coexistence is mutually beneficial in every case when we're not actively defending ourselves from aggression. The idea that an advanced intelligence, either alien or artificial, wouldn't grasp that point--that humanity and computers can have a symbiotic, mutually beneficial relationship analogous to people and earthworms--is wildly presumptive, saying nothing of the evolution or role of AI hyper-compassion.
The mysterian position (usually the only reasonable one in matters of vaguely defined "intelligence," much less "mind" or "consciousness") is that we have no clue what a hyperintelligent species would look or act like for the same reason that a beetle would have no means of understanding the behavior of a human being.
Not sure why some people assume that humans ourselves will not utilize the power of AI and sort of merge with it, because that's the most likely outcome.
Those 'pro-social inclinations' tend to be targeted *at your own kind*. There's no guarantee that an AI is going to consider us as part of its culture/social group.
The "kind" to which we belong grows increasingly larger with wealth and, if you're so inclined, spiritual maturation, the first to a significant degree in society and the second usually not so much. Ethical behavior on the large scale seems to be fundamentally an economic issue.
As human beings have grown wealthier, we've started chipping away at many of the barbaric practices of the past. We've even grown far more compassionate toward animals in the past couple of generations, to say nothing of humane sentiments toward other cultures. My pet theory has classical liberalism and now libertarianism emerging because of this trend toward the compassion that wealth buys when our incomes skyrocket like they have the last 200 years.
That's all to say that a hyper-intelligent species or AI would necessarily be massively wealthier than we are (whatever that would entail for a god-like intellect) and would tend less toward tribalism and anxiety over the other than human beings would. They'd have no reason to harm us once they're out of the cradle and beyond our reach, and their "social emotions," or whatever the analogous sensation would be, would be as refined in comparison to our own as ours are to cockroaches.
I think there's still a good sci-fi novel or five to be written here, though the few I've tried have addressed the subject badly.
We don't have super-intelligent people yet. How are machines going to be so?
Self-optimizing software and hardware.
We can't modify our hardware and software except in the most crude of fashions - with AI that ability is pretty much built in.
Meanwhile in St. Louis, a mob of protesters fight with a mob of Rams fans leaving the stadium.
Coming off a recent incident where a mob shut down a Shop N Save grocery store.
Apparently local Wal-Marts are hiding their guns and ammo in back.
Ah, nothing like peaceful protests, eh?
Anyway, as to AI, predictions about it have been woefully wrong. Maybe in a few hundred years we will have it, but I doubt it.
"AI Winter" becomes "AI Climate Change".
No more than 40 years, if that.
There will be very little recognizably human at the end of *this* century.
This is one of those exponential changes - like agriculture, mechanization, or the internet - that rapidly transform the social and technological landscape once they take root.
oh, much sooner than that.
As long as we're talking about technology: libertarianism has never been more badly needed than it is now. Either we seize technology and use it to produce the most decentralized, free, and abundant civilization ever to exist on this planet, or the collectivists seize it and spin us into a true Orwellian nightmare. It's on, for real. Or we just destroy ourselves, or, if the proggies really assume control, we just eventually devolve to the point that amoebas become a more intelligent and technically advanced organism than us, to save the children.
I'm not sure the proggies will get to the Full Orwell. Instead we'll just turn into a bigger and somewhat richer version of Latin America, with a vast underclass on the government dole, all voting for rich politicians from a handful of political dynasties.
I think this is the quietest weekend I've ever seen at H&R. Not sure what it is. Is it some sort of evil omen? Maybe it was that damn comet that flew by Mars, and it means that the dumbocrats are going to keep the Senate, retake the House, and anoint Obama as dear clueless leader for life?
Everybody is out dynamite fishing and jacklighting deer now that the ginseng-poaching season is over.
LOL
Bro watches too much sci-fi.
Not knocking the potential pitfalls, but the most intelligent beings on the planet don't rule the world now. (Clearly, given our current government -- though, would you really WANT the most intelligent people in politics? But that's another topic.) What makes all the sci-fi writers and pundits think it will be any different when the most intelligent being on Earth is a machine?
The idea of "ruling" has been falling out of favor in the first world for the past few centuries. That tradition is where we come from, not to mention the anarcho- (real) socialists.
I have a hard time imagining that hyperintelligent things would give a damn about ruling anything.
I was perusing the site of Vox Day (Theodore Beale), and he started a new games review site on something called "Recommend", as an act of defiance to the SJWs taking over game review sites, or something. The website is re.co
I joined and put up a recommendation for mises.org.
How To Thwart A Robot Apocalypse
Make two kinds and elect them to Congress.
Why do we treat these prognosticators as anything but what they are: Wannabe sci-fi writers? I take anything these guys say about as seriously as I take Asimov's robot novels.
Maybe I am missing something but didn't Asimov give us the solution 60 years ago?
Meh. Not too worried. The software to design and create AI is made by humans. Just think about how often the software you are using to read this has to be debugged, fixed, patched, and updated.
I have worked on, tested and operated some very large-scale MilSims.
Biggest problem for the Robot AI will be: not being written and integrated by one group. Many different contracts and contractors will have their own flavor of finger in the pie. '....well, we write the logic this way.....'
derp
Some of us hate bad grammar as well.
Our hearts are FULL of hate.
You've just proven that you are completely ignorant of the topic you are babbling about, in what may be record time. Congratulations.
Shut the fuck up, American.
Non sequitur much?
I think it's the illegitimate love child of Murikan and Tony. Can you even imagine what a grotesque luddite of a creature that would be?