I, For One, Welcome Our New Robot Overlords
The Future of Humanity Institute (FHI) at Oxford University is holding its Winter Intelligence Conference in a couple of weeks to try to figure out how to prevent a future robot uprising that would destroy humanity.
In a recent article, Huw Price, Bertrand Russell Professor of Philosophy at Cambridge, and Jaan Tallinn, co-founder of Skype, both fellows of the Center for the Study of Existential Risk (CSER),* laid out what they see as the dangers that will come with the rise of general artificial intelligences that can write their own software:
It would be comforting to think that any intelligence that surpassed our own capabilities would be like us, in important respects – just a lot cleverer. But here, too, the pessimists see bad news: they point out that almost all the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history – a history we share with higher animals, but not with computer programs, such as artificial intelligences.
By default, then, we seem to have no reason to think that intelligent machines would share our values. The good news is that we probably have no reason to think they would be hostile, as such: hostility, too, is an animal emotion.
The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen….
By now you see where this is going, according to this pessimistic view. The concern is that by creating computers that are as intelligent as humans (at least in domains that matter to technological progress), we risk yielding control over the planet to intelligences that are simply indifferent to us, and to things that we consider valuable – things such as life and a sustainable environment.
If that sounds far-fetched, the pessimists say, just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.
The conference will feature leading researchers in the field of artificial intelligence and some of the deepest thinkers about the ethical, economic, and existential implications of the development of super-intelligent machines.
You doubt that indifferent robot overlords will be a problem? Keep in mind that just last week my Reason colleague J.D. Tuccille warned that we should "Forget Drones, Beware of Killer Robots." The folks meeting at Oxford argue that that is just the beginning.
Back in 2008, I covered the FHI's conference on catastrophic risks to humanity. As background, see my reporting: "The End of Humanity: Nukes, Nanotech, or God-Like Artificial Intelligences"; "Will Humanity Survive the 21st Century?"; and "TEOTWAWKI!"
*Correction: In my initial post I wrote that the conference was being jointly held by the FHI and CSER. In fact, the conference is entirely run by the FHI. I apologize for any confusion that I may have caused.
Magnets
How do they work?
No one really knows. Oh, they can write the equations and measure the force fields, but it's basically magic to them. Gravity is even more unknown.
Feynman on Magnets is a fave vid of mine.
http://youtu.be/MO0r930Sn_8
Dude that jsut looks like its gonna be good!
http://www.Fake-dat-IP.tk
You would say that!
Best anonbot post ever? I say yes.
what they see as the dangers that will come with the rise of general artificial intelligences that can write their own software
Something tells me our robot overlords will at least be somewhat intellectually consistent, which would be an improvement over our meatbag overlords.
You know which overlords were very intellectually consistent?
The Borg?
"The bad news is that they might simply be indifferent to us ? they might care about us as much as we care about the bugs on the windscreen...."
So they would be just like me.
And our current overlords.
Well, they wouldn't pretend to care. Which would actually be an improvement.
That is true
I feel that the T-1000 empathizes with me. The T-101 comes off as flat, but the T-1000 can feel my pain.
some of the deepest thinkers about the ethical, economic, and existential implications of the development of super-intelligent machines.
"You know- morons."
virgin morons
Well at least we know that they will be smarter than our current 'leaders'. We are now ruled over by the dumbest, the most immoral, and the most sociopathic among us. I don't see how having AI overlords could possibly bring us to a lower state of fubar than what we have now.
Especially if their first priority would be to eliminate the most worthless and parasitical of the human species, since that would mean DC becoming a ghost town in about 10 minutes.
Does it take that long for a neutron bomb to do its dirty work?
No, but it might take that long to probe for survivors. The robots will be thorough, you know.
What if they view us all as parasites though, using their electrical lifeblood?
The issue is that empathy for humanity must be programmed into a robot, or it won't be there. I know the following is probably controversial for some of you, but altruistic behavior is a result of evolution. Species whose members care about other group members outlast those whose members don't, and altruistic behavior can be demonstrated in the natural world--even, I understand, among our closest relatives, the Bonobos.
Anyway, indifference to humanity doesn't need to be programmed. And the chances of self-writing programs writing themselves the necessary code for empathy by accident are pretty remote. Meanwhile, we have current developments like autonomous drones staring us in the face...
http://reason.com/blog/2012/10.....urderous-s
I imagine military applications might be the first place dangerous robots appear, and that leads to interesting questions about how we program autonomous drone robots to both care about people and hurl missiles at them.
I saw mentioned in the Defense Industry Daily email yesterday that the US military does not (currently) want lethal force to be used by drones, etc. without human control, so it may fall to some other nation to develop the autonomous killing machines which will doom humanity.
If altruistic behavior helps your individual chances of survival (or the survival of your genes) is it really altruism?
No. It's just selfishness acting on a different level.
"If altruistic behavior helps your individual chances of survival (or the survival of your genes) is it really altruism?"
Sometimes it hurts your individual chances of survival but helps your chances of reproducing.
So, for instance, my understanding is that female Bonobos, who are generally promiscuous, to say the least, will shun a male in the group who finds a lot of food but refuses to share it with others.
I doubt giving away your food really helps your individual chances of survival, but it increases your chances of passing on your genes.
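To make that tradeoff concrete, here is a toy expected-offspring calculation; the numbers are made up and only illustrate the logic of trading a little survival for a lot of mating opportunity:

```python
# Toy expected-offspring calculation with made-up numbers, purely to
# illustrate the tradeoff: sharing food costs a little survival but
# buys a lot of mating opportunity.
def expected_offspring(survival, mating, litters=3):
    return survival * mating * litters

hoarder = expected_offspring(survival=0.90, mating=0.20)  # well fed, but shunned
sharer = expected_offspring(survival=0.80, mating=0.60)   # hungrier, not shunned

print(hoarder, sharer)  # ~0.54 vs ~1.44 -- the "unselfish" strategy passes on more genes
```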
Also, to some extent, I think you're asking whether altruism itself is possible. If Jesus died for us because he loves us more than his own life, then did he really die for us or did he do it for himself?
I think some level of unselfishness must be possible. And whatever level of unselfishness it is that we're talking about when we talk about altruism, I'd argue that it is a result of evolution.
Either that or the existence of altruism despite survival of the fittest is an excellent argument for the existence of God.
Sometimes it hurts your individual chances of survival but helps your chances of reproducing.
chicks dig scars
The survival of your genes is the only relevant survival from an evolutionary perspective.
I agree we are probably just arguing about the definition of altruism (I don't think it exists). That said I think this is a perfect example of selfishness being expressed. We only evolved this behavior because it helped individuals pass on their genes.
If altruism was a genetic selector, it would be expressed before, or at least right around, reproductive age, and from my experience it most certainly isn't.
Well, and the link between behavior and genetics isn't exactly settled science.
I'm not sure we're talking about behavior specifically.
Maybe we're talking about temperament or instinct or even the capacity for feeling good when you do something unselfish.
You can breed dogs for temperament.
So you think jumping on a grenade or jumping in front of a bullet to save your comrades is not altruistic? What do you have to gain but death?
Having the camaraderie to jump on grenades for each other increases both the group's chance of survival and your own.
That explains why it's adaptive, not why it isn't altruistic.
"I think some level of unselfishness must be possible. And whatever level of unselfishness it is that we're talking about when we talk about altruism, I'd argue that it is a result of evolution."
In other words, maybe the good feeling we get--that more than compensates for doing something unselfish--is what we're talking about when we talk about altruism.
I used to volunteer at a homeless shelter, and it felt great. It felt great, but I don't think that makes volunteering at a homeless shelter selfish.
I'm not sure the good feeling makes you selfish it just doesn't make you altruistic either.
Would a society where people feel good when they do nice things for other people perform the same as a society where people don't feel good when they do nice things for other people?
I think that good feeling makes a big difference in terms of performance.
That's easy. You only send the drones/bots to kill bad people. Just ask Obama.
Honestly, a program to differentiate between legitimate future threats and inconsequential ones might do a better job of selecting the right targets than Barack Obama.
You'd have to have an override on American citizens, though. Not that Obama has any such override.
I know the following is probably controversial for some of you, but altruistic behavior is a result of evolution.
If by "controversial" you mean "subtly wrong", then sure.
Evolution doesn't produce altruism as a primary product. It produces selfish behavior like caring for children or looking out for kin or being empathetic because it helps us attract mates and understand how our competitors are thinking -- which some people mistakenly label as "altruism".
Again, just because doing something unselfish makes you feel good doesn't mean you aren't doing something unselfish.
If passing out bicycles for Christmas at an orphanage releases some neurochemical like oxytocin, that doesn't make passing out bicycles selfish.
http://en.wikipedia.org/wiki/Oxytocin
If you experience something like an endorphin induced runner's high after doing unselfish things, that doesn't mean you aren't doing unselfish things.
And if your brain releases neurochemicals like oxytocin and/or something like endorphins in response to doing unselfish things, yeah, that might be a result of evolution. In fact, just the capacity to produce such chemicals seems like a result of evolution.
Maybe spiders don't do nice things for other spiders. Spiders sometimes eat their mates, their children, their parents and their siblings.
"Evolution doesn't produce altruism as a primary product. It produces selfish behavior like caring for children or looking out for kin or being empathetic because it helps us attract mates and understand how our competitors are thinking -- which some people mistakenly label as "altruism".
Bonobo females shunning selfish males and thus choosing whose genes survive is just one example.
How do you account for species that keep guards on the periphery and call out when predators approach--thus bringing the predator's attention to themselves--just so the others can forage in peace?
"The inability to secrete oxytocin and feel empathy is linked to sociopathy, psychopathy, narcissism,[citation needed] and general manipulativeness.[2][not verified in body] However, there is some evidence that oxytocin promotes 'tribal' behaviour, incorporating the trust and empathy of in-groups with their suspicion and rejection of outsiders.[3]"
http://en.wikipedia.org/wiki/Oxytocin
I find that fascinating.
And the chances of self-writing programs writing themselves the necessary code for empathy by accident are pretty remote
Why? All of the programming for DNA code happened purely by random accident. No programmer, and yet, there you have it.
"Why? All of the programming for DNA code happened purely by random accident. No programmer, and yet, there you have it."
That happened over the course of hundreds of thousands of years, at least. We're talking about a team of programmers releasing an autonomous robot into the wild--with a development cycle much shorter than hundreds of thousands of years.
Their solution, apparently, from the Steigerwald post I linked above, is to keep humans managing the autonomous drones (the flying killer robots)--sort of a human override. And I suppose that would work if we stay committed to that.
But there may be even more advantages to even more autonomy for killer robots, and other actors may pursue those technologies without those safeguards.
Also, our military leaders' ability to both care about human targets and hurl missiles at them isn't something I'm entirely confident in either. Military brass, at various times in history, have factored civilian casualties into their calculations and decided to go ahead and give the green light despite them.
I mean, I'm not predicting flying killer robot apocalypse just yet, but this does bring up interesting questions that really should be discussed.
Read my post below, Ken. If an AI is smart enough, it can reprogram itself and even build a smarter version of itself, or an upgrade. No need for thousands of years to elapse.
"We could get to the point where machines start building better versions of themselves, which in turn build a better version.... it's all part of the singularity dude."
I don't see why we should assume that programs that write themselves will be sympathetic towards humanity specifically. The type of empathy humanity evolved was for each other--not for some other species. If killer robots learn to care about each other, that may not help us in the least.
But don't worry. We'll make great pets.
http://www.youtube.com/watch?v=FSOHO3GwEPg
will be sympathetic towards humanity specifically
No argument there. I am just saying that if life can appear spontaneously without any intelligent intervention, then it 'could' happen.
Moreover, I'm not convinced by the notion that hundreds of thousands of years (or longer) are required to achieve those results. What probably matters far more is the number of generations. And, if computer software is any guide, many generations of robots could come about very quickly, each one a chance for some type of selection to occur.
The other part of this is that human evolution was a messy business. Survival of the fittest and natural selection left a lot of blood on the floor.
I know a lot of the "evolution" we're talking about with autonomous robots would be going on inside the code, but it would be measuring itself against what's going on outside of itself.
I'm not sure I want to be an opportunity for an autonomous military robot to learn about empathy. Sometimes we learn by failing at stuff.
Computer programs are only as smart as the person who wrote them.
Don't expect any robot overlords soon. Or ever.
We could get to the point where machines start building better versions of themselves, which in turn build a better version.... it's all part of the singularity dude.
Haven't you read any Vernor Vinge?
Science fiction is fiction.
Even Star Trek II?
STAR TREK WASN'T FICTION YOU TAKE THAT BACK RIGHT NOW!!!!
+1
That doesn't mean it can't come about.
http://en.wikipedia.org/wiki/F.....o_the_Moon
Or at least be faked.
In the case of AI, very unlikely. At least not with the tools that software and hardware designers currently have.
I agree that it probably won't happen with the way we have been doing computing machines up to now. But as I said below, until we better understand how the brain actually works, it is hard to say how different it would have to be.
Computing is binary. It's all "this or that". The brain is not. Each neuron connects to potentially thousands of others. AI will always be on the horizon, just like useful electric cars.
Sarc, we are going to change your name from sarcasmic to pessimicastic. Stop doubting the singularity, dude, you are slowing down its coming with your negative karma flooding the interwebosphere!
The smartest guy I know won an AI contest for a program that could play Ms Pac Man.
That's where AI is at. Ms Pac Man.
Be optimistic all you want, but be prepared to be disappointed.
I'm not optimistic, I am somewhere in the middle, call it realistic.
Look at all the things that intelligent people have said were impossible, like flight or personal computers in every home. I won't go on; a simple web search will reveal a thousand examples.
In another couple hundred years, bar bureaucracy and regulation sending us back to the dark ages (a very real possibility seeing the current trend), the technology will be so far advanced as to be almost beyond belief of anyone today. Imagine the average person alive in the year 1800 seeing the technology of today.
And it doesn't concern you that his award winning AI was good at devouring the people it chased?
And it doesn't concern you that his award winning AI was good at devouring the people it chased?
Actually, it wasn't. But it was better than the competition, so it won.
That's where AI is at. Ms Pac Man.
Quite a while ago, AI computers became capable of beating the best human players in the world at chess.
Quite a while ago, AI computers became capable of beating the best human players in the world at chess.
That's simply a matter of calculating millions of permutations and choosing the best one. "Skill" level is simply how far it goes down the tree. There's no "learning" involved in the sense of making a mistake and based upon that making a different choice the next time. It will always make the same choice based upon how it was programmed. Always. That's not intelligence, it's just brute force computing.
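That brute-force, pick-the-best-leaf approach is just minimax tree search, and "skill as search depth" is easy to see in a toy version. A minimal sketch on a simple take-the-last-stone game; purely illustrative, not any real chess engine's code:

```python
# Minimal minimax sketch on a toy game: take 1-3 stones, taking the last
# stone wins.  "Skill" is literally just how deep the search goes; the
# same position and depth always yield the same move -- deterministic
# brute force, no learning involved.

def moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, depth, maximizing):
    if stones == 0:
        return -1 if maximizing else 1   # previous player took the last stone
    if depth == 0:
        return 0                         # ran out of depth: call it even
    scores = [minimax(stones - m, depth - 1, not maximizing) for m in moves(stones)]
    return max(scores) if maximizing else min(scores)

def best_move(stones, depth):
    return max(moves(stones), key=lambda m: minimax(stones - m, depth - 1, False))

print(best_move(10, 12))  # -> 2: take two, leave a multiple of four
```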
Until it's not.
Yes, have you heard of Roger Penrose? It's not the fault of programmers, it's the hardware. An algorithmic machine made out of doped silicon (or whatever) transistors is never going to be able to have self-awareness or whatever it is that makes true intelligence.
Yep. Also, many sci-fi writers have written things that have come to pass, Vinge among them.
Actually, I find it rather boring to debate the strong AI hypothesis: either one thinks Kurzweil et al. are full of shit or not. I happen to agree with Penrose that they are. The Emperor's New Mind was published in 1990, and so far his predictions on AI have held up much better than those of the strong AI folks who were talking about AI being right around the corner back then.
I've never read Emperor's New Mind, but new technology is virtually unimaginable until it actually comes around and is used for a while. I mean, look at ST:TOS - the web and modern computers can do things they couldn't even conceptualize back then, and we take them for granted now.
Of course, there are HUGE tracts of things that they imagined, but that we can't do now, so it cuts both ways.
But to me, strong AI is something that will eventually just happen. We're just not that complicated. And at a certain point, enough progress and it becomes a downhill progression.
Remember, it's hard to know whether you're on a true asymptote or just approaching the inflection point if you graph too narrow a function. I happen to think that as transistors approach their maximum physical limits - still a good ways off - we'll find that there are still only so many problem domains a finite machine can explore and NP problems will still be hard.
Penrose thinks AI will happen, but that the machine won't be algorithmic and that our brains aren't either. Hofstadter argued for the strong AI hypothesis back in Godel, Escher, Bach back in the 70s.
Penrose hypothesized in ENM that there was something on the quantum level that made our brains (and other intelligent animal's brains) fundamentally different than algorithmic machines like those made with stacks of binary transistors.
Maybe. But even humans solve most problems algorithmically. While you can just use Kentucky windage to fire your cannons at your enemies, it's far more efficient to plug into an algorithm and get it close, first. And then we invented computers to improve the definition of "close". So even if our brains don't work algorithmically, it seems to be a pretty solid formal structure to overlay on top of whatever the hardware does.
You can argue that the top 1% or .1% of people in a given field are probably beyond algorithmic computation into some "higher" form, but I'm not convinced that quantum computing is the answer to that.
Well, there's nothing on the quantum level that affects anything in our brain consistently enough to make a difference. Was he suggesting some kind of actual boson that showed up in brain activity?
I can maybe buy that stacks of binary transistors may never be able to replicate a brain; I mean that's pretty easy, just figure out how many binary calculations a brain makes per time unit, and then see if we can ever reach that with an algorithmic machine. I tend to think that eventually we'll get to that number with a computer - but possibly software/memory access may still pose a problem. I think that can be overcome with brute force, but that's just my opinion.
But there's always leaps, and shit not working the way we expected it to, and using technology for purposes other than it was intended for.
Researchers are already working on processors that work more like a neural network as opposed to the way processors work today. Also, when we start building things on a nano-scale, they could look a lot more like a biological creature than like a machine. It could be difficult to tell what is a machine and what is not.
Why should we expect things to look like biological processing on the nanoscale? Neurons are pretty large. Even their interesting features, like the ion channels, are macromolecule sized - pretty big by nano standards.
The brain is almost certainly a classical computer - no quantum effects whatsoever. But it does have things to teach us. For instance, its error correction mechanism (requiring many ion channels with hair triggers to sense a signal) could teach us a lot.
The brain is almost certainly a classical computer - no quantum effects whatsoever. But it does have things to teach us. For instance, its error correction mechanism (requiring many ion channels with hair triggers to sense a signal) could teach us a lot.
Yes, whether or not one believes that the brain is a classical computer, a deterministic algorithmic machine, is whether or not one buys the strong AI hypothesis. I don't. Roger Penrose didn't back in 1990, and his hypothesis has held up a lot better than the strong AI folks'. 22 years is a long fucking time in computing evolution, yet we aren't any closer to making a HAL 9000 now than we were back then, at least in my opinion.
But this ends up turning into a religious discussion, might as well debate the existence of God or AGW.
Penrose always pops up in these discussions. I got as far as reading John McCarthy's review of 'The Emperor's New Mind':
http://www-formal.stanford.edu.....rose1.html
All I concluded from the review was
"holy spit, I'll stay out of that debate."
It's always fun to see really smart people challenge each other, though.
Researchers are already working on processors that work more like a neural network as opposed to the way processors work today.
Well, I've only been hearing about neural networks being right around the corner since the early nineties, whereas fusion has been right around the corner since I was a kid in the early 70s. You'll excuse me if I've grown somewhat skeptical. Fusion is obviously possible, just as making a self-aware machine out of organic molecules is obviously possible.
Quantum computing requires quantum coherences. Where would those coherences arise? Between neurons? On the time scales of neurons firing? Hmm...there are very, very few systems which can maintain quantum coherences for more than a few microseconds, much less the milliseconds required by neural processes. Sounds like a load of shit to me.
I've read estimates that the average human brain is capable of sending 100 quadrillion instructions per second. The world's fastest supercomputer can deliver 20 quadrillion floating point operations per second. If we consider a floating point operation to be functionally equivalent to a brain instruction, then we're near having computers with the processing power of the human brain. Of course, the brain uses the majority of its instructions to keep up vital biological functions. A supercomputer need not do that. We may have already built supercomputers as powerful as a human brain.
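Taking those two figures at face value (both are rough estimates, not measurements), the back-of-the-envelope gap works out to about a factor of five:

```python
# Back-of-the-envelope comparison using the figures quoted in the comment
# above; both numbers are rough estimates, not measurements.
brain_ops_per_sec = 100e15     # ~100 quadrillion brain "instructions" per second
supercomputer_flops = 20e15    # ~20 quadrillion floating-point ops per second

print(brain_ops_per_sec / supercomputer_flops)  # 5.0 -- roughly a 5x gap
```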
Well, there's nothing on the quantum level that affects anything in our brain consistently enough to make a difference. Was he suggesting some kind of actual boson that showed up in brain activity?
Penrose thinks there is and that it is related to a ToE that unifies gravity and quantum mechanics. He thinks that there is an underlying set of rules that explain quantum mechanics beyond its current state of using probability waves, like Einstein explained the action-at-a-distance aspect of Newtonian gravity. And that understanding this underlying set of rules is probably necessary for understanding self-aware, non-algorithmic machines.
Ah, well then I disagree with him wholeheartedly.
So he basically is saying it's magic fairy dust.
I'm with Minsky on this one. If our brain relied on some sort of quantum Rube-Goldberg set up, humans would think much more similarly than they do and would be a hell of a lot more fragile.
Ah, well then I disagree with him wholeheartedly.
So he basically is saying it's magic fairy dust.
I'm with Minsky on this one. If our brain relied on some sort of quantum Rube-Goldberg set up, humans would think much more similarly than they do and would be a hell of a lot more fragile.
Well, he thinks there's something happening that we don't understand and that this something is explained by a ToE.
You agree that a ToE is not magic fairy dust? That there exist gravitons? What about the idea that there is an underlying mechanism that could explain quantum mechanics just as gravity warping space explained Newtonian action at a distance?
The idea that if I just packed some more transistors into my Pentium chip and maybe uploaded a neural net OS and it would suddenly be self-aware is what sounds like magic fairy dust to me.
To me the whole idea of a ToE being part of/having an effect on brains is the complete opposite of gravity warping space. It takes a chemical, or at least physical composition of the brain, and then says different laws operate on it because of its composition. We haven't seen anything like that in any experiment ever. Granted, I haven't read his books or looked at the research, but from the summaries it looks a lot like every time he's proven wrong, he cobbles on more and more 'stuff' to make it true.
Obviously your false dichotomy isn't true either, and I don't believe that (unless a defense robot gets struck by lightning, in which case it would come alive, of course).
Maybe gravitons exist, maybe they don't. I personally don't think they do, but that's just an opinion, and I wouldn't be surprised if they did, just would be wrong.
I don't think a ToE is magic fairy dust, but frankly we may not be capable of understanding it anyway. I honestly think we're closer to strong AI than we are to a ToE, though we will likely get neither in our lifetimes.
What false dichotomy?
I haven't read his books or looked at the research, but from the summaries it looks a lot like every time he's proven wrong, he cobbles on more and more 'stuff' to make it true.
Penrose's predictions and arguments have held up much better than the strong AI folks.
Your computer is a deterministic algorithmic machine. Do you think that if you just packed some more transistors in it and updated the software it could be self-aware? Because that's where the strong AI argument leads.
Yes, I believe that enough calculations with enough access to memory will lead to an AI.
I don't think there's some mysterious n-dimensional boson (one that also we can't see for some reason at CERN or any of the other colliders - is it heavier than Higgs? Faster than light?) that provides consciousness.
Ah, so you must buy Hofstadter's argument from GEB that the algorithm of Einstein's brain could all be written down in a very large book, and one could then have a conversation with said book. Or that this algorithm could be emulated on an iPhone given enough access to memory.
Good thing you don't buy into any magic fairy dust crap.
The anti-strong AI argument is simply that the human brain is not algorithmic and that we don't understand why. What those unknown factors are is merely speculation, but the idea that there can be no unknown factor for the human brain 'cause we got all that shit figured out on a macro level is high hubris.
Yes, I believe that enough calculations with enough access to memory will lead to an AI.
For intelligence you still need the capacity to learn. So far software hasn't gotten that far. Computer programs do what the programmer tells them to do, and that's it. There's no "learn" command in the toolbox. No amount of computing speed will give it to you. Computer languages have the same tools that they had when the first ones came out. Sure a program may choose from different choices based upon input, but those choices have to be defined by the programmer. It won't come up with new ones on its own, and so far nobody's figured out how to make it do so.
AI is on the horizon, and that's where it will stay.
"There's no "learn" command in the toolbox. No amount of computing speed will give it to you."
Well, there are neural network robot insects that apparently learn to walk like their real world counterparts, ie, six legged robots walking like ants, etc. But there's been no real progress on that front.
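For what it's worth, at the toy level "learning" just means a program that adjusts its own decision rule whenever it guesses wrong. A minimal perceptron sketch, illustrative only and unrelated to those robot-insect controllers:

```python
# Minimal perceptron sketch: the program changes its own decision rule
# whenever it guesses wrong, which is all "learning" means at this toy
# level.  Illustrative only -- not any real robot controller's code.

def train(examples, passes=20, rate=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(passes):
        for (x1, x2), target in examples:
            guess = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - guess        # 0 if right, +/-1 if wrong
            w[0] += rate * error * x1     # nudge the rule toward the right answer
            w[1] += rate * error * x2
            bias += rate * error
    return w, bias

# Learn the logical OR function from examples alone.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([(1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) for (x1, x2), _ in data])
# -> [0, 1, 1, 1]
```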
Well, it's late and I had to drive home and eat and all, so no one will read this. But all that stuff about no 'learn' command in the tool box is just an opinion. There's nothing factual that says consciousness is some sort of super special thing. It's calculations and memory access. Period.
A thousand years ago, people could only conceive of a thousand things. As of today, the average person can conceive of a million things. You can obviously write as big a number as you like, but you have no concept of it. While the difference between 10E9 and 10E100 is easy to write, it's impossible for anyone to actually understand.
Same thing with calculations. When you're thinking of 'faster' computers, you're thinking of what you can conceive as faster. Which isn't anything close to how much faster they might actually be.
There's no magic shit happening. It's electrical and chemical reactions in a 4-dimensional timespace, and that's all it is. It doesn't matter if it's done in an organic brain or a mechanical one.
That bullshit about Einstein's brain being in a book is only bullshit because you can't actually have a concept of the number of pages and letters it would require.
And we may never get there, but it will be because we suck at doing shit, not because there's some magic horseshit going on that makes us special.
There's no magic shit happening. It's electrical and chemical reactions in a 4-dimensional timespace, and that's all it is. It doesn't matter if it's done in an organic brain or a mechanical one.
With our current binary computing architecture and programming language structure (if, if/else, while, goto, etc), AI in the sense of a self-aware learning machine will always remain on the horizon.
I'm not ruling out some future technological breakthrough, just saying that it ain't happening with what we've got.
I don't think that is obvious until we figure out more about how the brain actually works. Assuming that the mind is the product of the physical processes of the brain, there must be some way to duplicate or simulate how it works.
Which is pretty much Penrose's stance. He's not saying it's magic fairy dust or anything.
What sarcasmic said.
Yeah, if they're programmed anything like smartphones they'll have way too many glitches to be a threat and they'll quit working within a year.
Or if they run on a Microsoft OS, we can take them down when they stop to reboot themselves.
Or crash from the blue robot death.
What if two people write a program?
Is it as smart as both of them combined? Or just as smart as the smartest one? Or the average of the two of them?
What if a program is written by a dumb guy but then de-bugged by a really smart guy, is the program really smart or dumb? Or somewhere in between?
derp
What if a super smart guy from the future time travels back here and programs a computer? Then they'd be supersmart.
Or what if a super smart and sexy lady from the future time travels back to here with her talking pie and starts to solve murder mysteries? With the help of her wisecracking 21st century body guard? Will they ever make a love connection?
But a robot overlord doesn't have to be smarter than the people who programmed it, or the people it rules. It only has to be better at overlording.
Remember kids, AI is just twenty years away, and it will continue to be twenty years away for at least twenty more years.
I'm still waiting for my sex robot from Westworld.
Like who isn't man?
Oh my, here come the feminized trolls, Tony in 3...2...1...
Actually, you have the most substantive post so far. I am eagerly awaiting my fully customizable fembot 3000 model. This was bound to turn into a sexbot thread eventually, so what the who, let's get it over with.
Hey in only 5 years we can get our Cherry 2000s.
Sexbots? It's a war on vaginas! Rethuglicans, Koch Bros, Booooooshhhhh!!!!
So about the same time we get fusion.
Can't we just make the robots three laws compliant and prevent them from rewriting that section of their code?
Pfft, like the robots haven't thought of that already and figured out how to work around it.
Nope, complete submission is the only choice.
"Hey, sexy mama... wanna kill all humans?"
The AI's are going Gangnam Style?
You have to leave one human alive who knows how to make "Bender waffles" properly.
Also, did reason tie Christmas bonuses to alt-text? Because everybody has upped their game in the last week.
All we can hope for is what proglodytes fear most: Libertarians, backed by the Koch Bros, building and unleashing an army of evil robots to take over the world.
Great, so what Lucas predicted in the unmentionable films is our best hope?
The Trade Federation (Libertarians) backed by the Sith (Koch Bros) unleash an army of robots.
The libertarian internet bots were just the proof of concept.
Robots could be the ultimate nanny state.
Test test testing this is a test