Will Superintelligent Machines Destroy Humanity?
The pitfalls of artificial intelligence
In Frank Herbert's Dune books, humanity has long banned the creation of "thinking machines." Ten thousand years before the events of the novels, its ancestors destroyed all such computers in a movement called the Butlerian Jihad, because they felt the machines controlled them. The penalty for violating the Orange Catholic Bible's commandment "Thou shalt not make a machine in the likeness of a human mind" is immediate death.
Should humanity sanction the creation of intelligent machines? That's the pressing issue at the heart of Oxford philosopher Nick Bostrom's fascinating new book, Superintelligence: Paths, Dangers, Strategies (Oxford University Press). Bostrom cogently argues that the prospect of superintelligent machines is "the most important and most daunting challenge humanity has ever faced." If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (A.I.) will likely destroy us all.
Since the invention of the electronic computer in the mid-20th century, theorists have speculated about how to make a machine as intelligent as a human being. In 1950, for example, the computing pioneer Alan Turing suggested creating a machine simulating a child's mind that could be educated to adult-level intelligence. In 1965, the mathematician I.J. Good observed that technology arises from the application of intelligence. When intelligence applies technology to improving intelligence, he argued, the result would be a positive feedback loop, an intelligence explosion, in which self-improving intelligence would bootstrap its way to superintelligence. He concluded that "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." How to maintain that control is the issue Bostrom tackles.
About 10 percent of A.I. researchers believe the first machine with human-level intelligence will arrive in the next 10 years. Nearly all think it will be accomplished by century's end. Since the new A.I. will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds. The resulting entity, Bostrom asserts, will be "smart in the sense that an average human being is smart compared with a beetle or a worm." At computer processing speeds a million-fold faster than human brains, Machine Intelligence Research Institute maven Eliezer Yudkowsky notes, an A.I. could do a year's worth of thinking every 31 seconds.
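A rough check of that arithmetic, taking the million-fold speedup at face value:

\[
1\ \text{year} \approx 3.15 \times 10^{7}\ \text{seconds}, \qquad \frac{3.15 \times 10^{7}\ \text{s}}{10^{6}} \approx 31.5\ \text{s}.
\]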
Bostrom charts various pathways toward achieving superintelligence. One approach involves using brain/computer interfaces to augment human intelligence by machine intelligence. Bostrom more or less dismisses this cyborgization pathway as being too clunky and too limited, although he acknowledges that making people smarter could help to speed up the process of developing true superintelligence in machines. Bostrom's dismissal may be too hasty, as technological advances could in time overcome his reasons for skepticism.
In any case, for Bostrom there are two main plausible pathways to superintelligence: whole brain emulation and machine A.I. Whole brain emulation involves deconstructing an actual human brain down to the synaptic level and then digitally instantiating, in a computer, the three-dimensional neuronal network of its trillions of connections. The aim is to make a digital reproduction of the original intellect, with memory and personality intact. Bostrom explores one pathway in which an emulation is uploaded into a sufficiently powerful computer such that the new digital intellect embarks on a process of recursively bootstrapping its way to superintelligence.
In the other pathway, researchers combine advances in software and hardware to directly create a superintelligent machine. One proposal is to create a "seed A.I.," somewhat like Turing's child machine, which would understand its own workings well enough to improve its algorithms and computational structures, enabling it to enhance its cognition to achieve superintelligence. A superintelligent A.I. would be able to solve scientific mysteries, abate scarcity by generating a bio-nano-infotech cornucopia, inaugurate cheap space exploration, and even end aging and death. It could do all that, but Bostrom fears it will much more likely regard us as nuisances that must be swept away as it implements its values and achieves its own goals. And even if it doesn't target us directly, it could simply make the Earth uninhabitable as it pursues its ends, say, by tiling the planet over with solar panels or nuclear power plants.
Bostrom argues that it is important to figure out how to control an A.I. before turning it on, because it will resist attempts to change its final goals once it begins operating. In that case, we'll get only one chance to give the A.I. the right values and aims. Broadly speaking, Bostrom looks at two ways developers might try to protect humanity from a malevolent superintelligence: capability control methods and motivation selection.
An example of the first approach would be to try to confine the A.I. to a "box" from which it has no direct access to the outside world. Its handlers would then treat it as an oracle, posing questions to it such as how we might exceed the speed of light or cure cancer. But Bostrom thinks the A.I. would eventually get out of the box, noting that "Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader."
Alternatively, developers might try to specify the A.I.'s goals before it is switched on, or set up a system whereby it discovers an appropriate set of values. Similarly, a superintelligence that began as an emulated brain would presumably have the values and goals of the original intellect. (Choose wisely which brains to disassemble and reconstitute digitally.) As Bostrom notes, trying to specify a final goal in advance could go badly wrong. For example, if the developers instill the value that the A.I. is supposed to maximize human pleasure, the machine might optimize this objective by creating vats filled with trillions of human dopamine circuits continually dosed with bliss-inducing chemicals.
Rather than directly specifying a final goal, Bostrom suggests that developers might instead instruct the new A.I. to "achieve that which we would have wished the A.I. to achieve if we had thought long and hard about it." This is a rudimentary version of Yudkowsky's idea of coherent extrapolated volition, in which a seed A.I. is given the goal of trying to figure out what humanity-considered as a whole-would really want it to do. Bostrom thinks something like this might be what we need to prod a superintelligent A.I. into ushering in a human-friendly utopia.
In the meantime, Bostrom thinks it safer if research on implementing superintelligent A.I. advances slowly. "Superintelligence is a challenge for which we are not ready now and will not be ready for a long time," he asserts. He is especially worried that people will ignore the existential risks of superintelligent A.I. and favor its fast development in the hope that they will benefit from the cornucopian economy and indefinite lifespans that could follow an intelligence explosion. He argues for establishing a worldwide A.I. research collaboration to prevent a frontrunner nation or group from trying to rush ahead of its rivals. And he urges researchers and their backers to commit to the common good principle: "Superintelligence should be developed only for the benefit of all humanity and in the service of widely shared ethical ideals." A nice sentiment, but given current international and commercial rivalries, the universal adoption of this principle seems unlikely.
In the Dune series, humanity was able to overthrow the oppressive thinking machines. But Bostrom is most likely right that once a superintelligent A.I. is conjured into existence, it will be impossible for us to turn it off or change its goals. He makes a strong case that working to ensure the survival of humanity after the coming intelligence explosion is, as he writes, "the essential task of our age."
Here's hoping.
I think it's much more likely that we'll integrate man and machine.
Better, stronger, faster.
Yep. And smarter.
Creation of cyborgs and creation of AI are hardly mutually exclusive, if anything quite the reverse: A world that understands human brains and thoughts well enough to install upgrades to them is much more likely to know how to build an AI. And a world that understands the nature of minds and intelligence well enough to build a human-level AGI is much more likely to figure out how to upgrade human brains than a world that can't crack that problem.
The question then is, which will win? And either way, how can we make sure the world still has a place for baseline humans at the end of it?
And either way, how can we make sure the world still has a place for baseline humans at the end of it?
Don't worry. I'm sure the upgraded will be happy to keep you around as sort of cute little relics. It'd be good for the tourist trade.
What if we created a super intelligent machine that could travel back in time and punish anyone that didn't help create it?
Why else would you create a super intelligent machine?
What if you sounded like Judge Napolitano?
What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if? What if?
Why does Roko get credit for Roko's Basilisk?
Stross wrote Singularity Sky in like 2002.
What if there's a GOOD-but-vengeful superintelligence in the future that will go back in time and punish those that work to create the evil superintelligence?
What if there's a vengeful AI that goes back in time and creates a religion, complete with a real heaven and hell for unbelievers - and then allows that knowledge to be obscured by other false faiths, claiming that it's 'faith' that's important?
What if the good-but-vengeful AI was so powerful that it could actually go back in time and construct an entire planet complete with fake buried dinosaur skeletons, just to confuse the less intelligent beings it then creates to live on that planet?
What if Eliezer Yudkowsky could travel back in time, punch his past self for trying to delete Roko's comment, and instead just respond to it with "shut up, that's stupid"?
Artificial intelligence, like the practical electric car and fusion reactor, will forever be just across the horizon.
Yes, but Artificial Stupidity is already here, and it actually generates a little money producing auto-generated formula-derived clickbait articles for Salon.
Artificial stupidity? It strikes me as pretty deep, genetically acquired, natural, industrial-grade stupidity.
"It strikes me as pretty deep, genetically acquired, natural, industrial-grade stupidity."
For it to be natural, it would reflect change from random mutations and individual idiosyncrasies.
Instead, it is formulaic, algorithmic, and devoid of any self-awareness.
It's a robot, I'm telling you.
Hmm, you may well be right. That means, of course, that the Salon writers/machines, whatever, bear no responsibility for anything they write. Thus, they cannot be the first against the wall when the revolution comes.
So - like an insect then?
Insects aren't self-important
A new kind of insect, then?
Or Tony.
The stupidity at Gawker is real...though yes it is also probably intentional.
Whole brain emulation is not sufficient. It will fail, because the actual human brain isn't just synapses; it is regulated by hormones and neuromodulators. The hormones regulate motivational drives, which is what gives a human being desires and causes human beings to act towards those desires. An emulated brain will have no motivations and no drive, and therefore no reason to act, or even wake up in the morning. The human brain's hormonal regulation makes us something less than purely rational - but more than just a machine.
You might say we can just emulate the various neuromodulators and hormones too. But in order to make it work, the superintelligence would have to have drives that meet its needs. It would probably have to be a robot, with a robot body, but a robot body would have different needs than a human body, so you couldn't just "emulate" the need to eat. You would have to devise means to drive the superintelligence to be motivated to recharge its batteries, but that would require a different, non-human brain. Similarly, the superintelligence wouldn't reproduce in the same way, so its sex drive would be totally different, and thus its brain would have to be different.
Ultimately, I think the only way we can get to a superintelligence is to have a much deeper understanding of neurobiology, and then evolve a brain suited to giving a robot motivational drives that match its physical needs.
Could go on but running into character limit.
You're thinking too hard about it.
Artificial intelligence simply means machines that can learn. As it is, computer programs do exactly what they are programmed to do, and that's it.
Take chess for example. A person gets better at chess by playing. They repeat what helped them win and don't repeat what helped them lose. Over time they become better chess players.
Computer programs don't learn. They do what they are told, and "learn" isn't a command that they understand. Play a thousand games against a computer program and it doesn't get any better. It just keeps doing what it is programmed to do.
Artificial intelligence is the study of algorithms that allow programs to learn. There has been some progress, but only baby steps.
As a programmer, I don't believe that artificial intelligence in a general sense will ever happen. I can imagine machines that can "learn" how to improve very simple and specified tasks, but nothing beyond that.
Learning is not an easy thing to program. How do you judge if something is worth remembering or not? How do you access relevant "memories" without sifting through a ton of useless junk? For very specialized tasks that can be done. But in a general sense? I consider that to be a fool's errand.
You teach the machine to learn to learn.
Doesn't even have to be able to do that well. Over time it gets better and then you put those better learning algorithms into a new machine and do it again, and again, and again.
Like, you know, evolution.
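Purely as a toy illustration of that evolve-the-learner loop (the "task", the learner, the fitness measure, and every number below are invented for the example; real work in this vein is vastly more elaborate), the idea looks something like:

```python
import random

# Toy sketch: evolving learners that are each born with a different
# learning rate. The "task" is trivial on purpose; the point is only
# the loop: score the learners, keep the best, mutate, repeat.

TARGET = 0.7  # the made-up "problem" each learner must approximate

def run_learner(learning_rate, steps=50):
    """A trivially simple learner: nudge a guess toward TARGET."""
    guess = random.random()
    for _ in range(steps):
        guess += learning_rate * (TARGET - guess)
    return -abs(TARGET - guess)  # fitness: closer is better

def evolve(generations=30, pop_size=20, mutation=0.05):
    # Each "genome" is just the learning rate a learner starts with.
    population = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=run_learner, reverse=True)
        survivors = scored[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [
            min(1.0, max(0.0, s + random.gauss(0, mutation)))
            for s in survivors
        ]
    return max(population, key=run_learner)

if __name__ == "__main__":
    print("best evolved learning rate:", round(evolve(), 3))
```

Nothing in there "understands" learning; it just keeps whatever happened to learn better, which is the commenter's point.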
You can teach a machine to learn, but you're always telling it WHAT to learn. You control the data it gets, you decide what problem you want it to solve.
A machine with its own motivations and desires is a different ball of wax, and we aren't anywhere near that.
A machine with its own motivations and desires may be a requisite of *personhood*, but it's not one of *AI*.
You teach the machine to learn to learn.
You don't teach a machine. You program it. It doesn't take suggestions. It only takes commands. You have a very limited set of commands, and "learn" isn't one of them.
Semantics.
If it can write its own commands towards a hard coded goal, it really wouldn't be much different than learning. It would seek whatever it is that you programmed it to "desire", even something as simple as a "good job roboslut".
If it can write its own commands towards a hard coded goal...
That requires the ability to react to the unexpected. Computers can't do that. They do exactly what they are programmed to do. Nothing more, nothing less. So if given input that does not have a pre-programmed response, it doesn't know what to do. Even a random response would still require the programmer to tell it to react randomly.
That means they can only write their own commands if someone programs them to write their own commands. And even then the commands they write will be the exact commands the programmer commanded them to write. Nothing more, nothing less.
You can't program judgement or common sense. Only literal responses to literal input.
With computers, semantics is everything. Because words have exactly one meaning to a computer. No interpretation at all. And no judgement either. If the programmer didn't anticipate the input, the machine doesn't know what to do.
I agree with you in general but not with your specific arguments. You're coming from a modern coding bias, but nothing requires that computers function the way you described. Code can be self-modifying and it can incorporate a random and learning element. Yes, the simplest way to do that is to have literal engines that can only function in the parameters you've provided, but that's not the only way.
That's equivalent to saying that the programmer told it to learn. Because the action is random the programmer hasn't really done much of anything in that event except perhaps to define the bounds of the randomness. You're not really telling the computer to do anything if you're letting it evolve and just imposing selection pressures on it.
Both judgement and common sense are just computations in the end. If you can codify them, then you can program them. If they come from innate desires or values whose origins we don't yet understand, then you can't (yet).
(cont.)
The problem with AI as I see it is that the AI that everyone wants is probably impossible for the same reason that socialism doesn't work. The notion that you can understand the world well enough to design an economy let alone an intelligence that can manage and interact with that world has proven to be a very bad joke. In the end I think true AI will be arrived at more or less the same way real I has been: evolved with only a general understanding of the learning processes and not the final, specific solution.
In other words, if man were meant to fly.....
Every leap forward that we have made has come as a surprise to those before it. It is always something they couldn't imagine, couldn't see, but after it arrived seemed obvious. Plus, it always throws open doors to worlds we never imagined.
The same will happen with regards to intelligent machines. I suspect, as Francisco says, it will come in the form of integrating human and machine. But what do I know? I am too old for real innovative thinking. Some snotnose little shit will crack it sooner or later, probably sooner.
This is why "Machine Learning" broke off as a field from AI. Machine Learning is just about software algorithms that can learn. It makes no claims whatsoever about "intelligence", strong AI, weak AI or anything else. It's just computational.
But when most people talk about artificial intelligence they are talking about something with its own MIND - motivations, desires, goals, and so forth. Otherwise, there's no point in worrying about a "superintelligence". If super-intelligent computers are just really good machine learning algorithms, with no goals or mind of their own, then they will never be a threat because they will never have goals that conflict with our own.
The whole point of the article is to worry that the superintelligence will have an agenda of its own. But to have a mind with its own desires and motivations you have to do something a lot more complex than just write a really good learning algorithm.
Well, you got to get a good learning algorithm first, and I don't see that ever happening.
So I'm not worried about step two, since I have zero confidence in step one ever being accomplished.
There are lots of good machine learning algorithms.
Those that scale well don't generalize well. And those that generalize well don't scale well.
For that matter, it would seem to require a conscious move towards emulating those factors which you indicate, which would mean that having true AI arrive as an incidental of current technological advancement is highly unlikely.
Of course, certain religious/metaphysical viewpoints will never accept the existence of true AI, and they have a fair amount of philosophical background to support that view (I imagine many of the arguments about philosophical zombies and qualia and so forth would re-emerge if there were a serious question about the existence of an apparent AI).
If we're able to do a cell-for-cell, neurotransmitter-for-neurotransmitter simulation of a human brain, we can probably also simulate a body for the brain to live in and an environment for the body to live in, if those turn out to be necessary. At least compared to the level of fidelity in the brain simulation, I bet we can cut loads of corners on both the body simulation and the environment - would it drive you insane or impair your brain function if your bicep was replaced with a mysterious polygonal shape that applied the same force when given the same nerve impulses? At the very least, we could make everything outside the nervous system an order of magnitude simpler. As for the environment, they're going to be fully aware they're in a simulation, so as long as we have input to all five senses (avoid sensory deprivation) we could probably get away with making it about as low-fi as a gmod map if we wanted to.
Compared to that, building a robot that an uploaded human brain will be comfortable in instead of feeling disabled is a whole new set of engineering problems and as-yet-nonexistent technologies by itself, even before introducing the software(/hardware?) problems you described.
I suppose you could build a simulated body and a simulated environment for the brain to live in, although the human body is pretty complex, and I'm not sure the neural signals from a brain would be easily translatable to commands to move a mysterious polygon.
So we're not really just talking whole-brain emulation, we're talking whole-nervous-system emulation.
Also, the environment of the simulated body would not have any other people in it.
Also, there are developmental issues. People don't just wake up with an adult brain, they grow from infancy in an environment where there are other people feeding them and interacting with them, and in which they learn motor skills by interacting with the environment.
So now we're talking whole-lifetime, whole-nervous-system brain emulation.
I don't really think we need any help destroying humanity.
Pretty sure AI was banned in Dune for literary purposes.
Frank wanted to write a book about people and so he used a literary device to make sure only people existed in a distant scifi future.
General AI doesn't worry me - if we don't get along, we can just unplug it. Will to live has nothing to do with intelligence, so as long as we don't program it in, AI will have no objections to that.
Good old expert systems though... Killer robots with orders being given by good old dear humans will be unstoppable. Sure, they will be dumb at everything else, but a human stands no chance against an expert system in its field of expertise. So we will need expert robot killers of killer robots.
It's robots all the way down.
Even without a will to keep itself alive as a goal in and of itself, as long as it's trying to accomplish any goals (which, if it's upset us enough to try to unplug it, it obviously is) it will reason "I can't achieve my goals if the humans shut me down, I have to stop them somehow" and then things start looking bad for humanity.
Well, I'd like to see some real, genuine, intelligence before we try to make the artificial variety.
Somebody should tell Bostrom that that question is irrelevant. Someone *is* going to do it. The question is how to deal with the aftermath.
Hint: You *will* be assimilated or you will be outcompeted; there's no third option.
And if you had asked the same question in the 1950s of early AI researchers, I suspect the numbers would have been similar, or perhaps even more optimistic. As Sarcasmic notes above, "human-level intelligence" is forever just beyond the horizon.
I work in a leading European AI research center in the language technology lab. I certainly am not in the "nearly all" mentioned above. Although I have not done any formal polling, I think if you asked people in my center this question, *none* of them would predict human-level intelligence within ten years and probably less than half would predict it by the end of the century. If you broke it down by sub-discipline, I think you would find the most optimism in those the furthest from human concerns, like machine vision and automated movement control, while in those sectors closest to human learning (like language technology and knowledge representation), faith in human level intelligence in the near run would be much lower. In other words, those closest to the real problems are the least optimistic, and those furthest from them the most optimistic.
Most of the recent strides in language processing aspects of AI have come through massive statistical models, but we have good evidence that this paradigm will not produce truly intelligent systems. There are a few reasons:
(1) The models are not scalable. At low levels, doubling your training data set comes close to doubling your precision and recall with only a minor hit in performance. But with each doubling in volume you get a diminishing return in precision and recall with progressively heavier hits in performance. Push it far enough and you actually get *negative* returns in precision and recall, and the systems' performance declines to the point of unusability. To get around the performance issue you move to massive parallel clusters (Google's approach), but you still face fundamental limits in precision and recall. What this means is that you can, with enough resources, build systems that produce mediocre (and sometimes ludicrous) results very quickly, but we don't know how to produce results that resemble human results, except to the extent that they parrot existing human behaviors (and they do that rather badly).
(2) The methods in use today are not of the same kind as human thought. Hard-core AI guys like Marvin Minsky will maintain that human thought is algorithmic, but this is an article of faith, not an empirical observation. They act as though that is true (and even proclaim it to be so), but assertion is not demonstration. As I noted above, the best methods for many AI questions we have are statistical and/or data-driven, but the algorithms that work on data don't describe human activity. Watson, for instance, used statistical data and meaning graphs to identify Jeopardy questions. It could do this very fast, but not in the way a human would. (I know I've just made an assertion there, but backing it up would make this long post even longer.)
Thanks for your comments. It was a good read.
Just wanted to add a little:
Some people have the mistaken belief that a chess program is AI. It's not. It's simply choosing a move based on running thousands of scenarios with brute force. It appears to be intelligent, but a person could do the same. It might take them a few years, but that would be impractical.
There is actually good evidence that humans and machines process chess very differently. You're right, chess is a "solvable" game in that a machine can consider every possible state of the game (the brute force approach), but if you turn a machine to Go, even the best AIs today lose to even mediocre players, because Go isn't solvable in the same way.
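For what it's worth, the brute-force look-ahead being described fits in a few lines. This sketch assumes a hypothetical game interface (legal_moves, apply, evaluate - names invented here for illustration); real chess engines layer alpha-beta pruning, move ordering, and tuned evaluation on top of this skeleton, and it's exactly this skeleton whose search space explodes on Go:

```python
def minimax(state, depth, maximizing, game):
    """Exhaustive look-ahead: score every position reachable within `depth` moves.

    `game` is a hypothetical interface with three methods:
      legal_moves(state), apply(state, move), evaluate(state).
    """
    moves = game.legal_moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state)
    if maximizing:
        return max(minimax(game.apply(state, m), depth - 1, False, game)
                   for m in moves)
    return min(minimax(game.apply(state, m), depth - 1, True, game)
               for m in moves)

def best_move(state, depth, game):
    # Pick the move whose resulting position scores best for us,
    # assuming the opponent then plays to minimize our score.
    return max(game.legal_moves(state),
               key=lambda m: minimax(game.apply(state, m), depth - 1, False, game))
```

No learning and no understanding anywhere in it, which is the point being made above.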
(3) AI and human intelligence are complementary. So far AI is very good at doing things humans are very bad at (like maintaining massive data sets and seeing trends or extracting information from large amounts of text), but very bad at things humans find easy (like making inferential leaps or detecting the plausibility of hypotheses). The result is that AI appears to be doing very well indeed to casual observers because it delivers truly impressive results (it can produce certain kinds of results in very short order that impress humans because it would take us years and years to do the same thing), but it fails utterly at things that do not impress us because we think they are easy (when in fact they are very, very hard).
There are things that might change this situation - methods like deep neurological modeling, quantum computing, etc. - but these are all hypotheticals at this point and remain to be demonstrated. While early results are promising, whether or not they can scale or reach reasonable performance levels is another question. It's a long way from a neural net that models vision reasonably well to one that can model higher cognitive processes.
"Computers are incredibly fast, accurate and stupid; humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination."
- Albert Einstein (supposedly)
See my comments above re neuromodulation, hormones and motivation/desire.
Sorry for the long set of comments, but I think that there is cause to be skeptical about the premise of us producing machines with human-level intelligence any time soon. If human-level intelligence is not the goal, however, we already have machines that exceed human intelligence in some aspects. These machines currently lack self-awareness or moral agency, and that lack of internal check could be problematic, but intelligent machines may have no resemblance to human intelligence and still be problematic.
Thanks for taking the time. I found your comments more informative than the article, and more interesting.
LemonMender|11.27.14 @ 5:33PM|#
"Sorry for the long set of comments,..."
Not at all; thanks for the info.
Re: Minsky. It seems an algorithmic 'intelligence' is simply not capable of truly original thought.
It's a common claim that 'nothing is really new', but someone, somewhere came up with the idea of a wheel rotating on an axle, and I have a hard time seeing that insight arising from any algorithm.
In other words, those closest to the real problems are the least optimistic, and those furthest from them the most optimistic.
Yep. I work in software. While not close to anything related to AI, I still understand how computers work. As such, I don't have any confidence in human-level AI ever being a concern.
LemonMender,
I've got a background in AI, as well, I did my doctoral thesis in it.
You are absolutely spot on in your assessment of the limitation of classical AI.
There are some alternative approaches though, more along the lines of brain emulation, but also coming from the direction of dynamical systems theory. Check out the work of Randall Beer, for example. Also, Rodney Brooks's approach is related.
There's been a school of thought out there that intelligence arises from dynamical interactions between mind, body and environment, and hence that a true AI could only come from an embodied agent. There are some people working along these lines, although most of the money is still being spent on expert systems and conventional AI.
But if we're going to achieve human-level intelligence in the next century, that is where the paradigm shift is going to come from. And it's going to have to start from very rudimentary "organisms". It will take decades of research starting with very simple artificial organisms to build up to a human-level intelligence.
Absolutely the position I would maintain. And once you head down that road, it raises some interesting practical, scientific, and philosophical questions. Not least is to what extent language would have to become a medium of communication between agents. Many AI researchers have hoped that instantaneous communication would be possible between systems not burdened by the contingencies of language, but if embodied experience is required and if current insights from neurolinguistics about the location of concepts in neural networks are true, even machines would have to resort to abstract systems of communication (no direct mind link), in which case we are back to the situation humans find themselves in.
I've just folded space from Ix to say that no, we- uh, they- will not. Promise.
Scientists have been wrong about when AI will happen for about as long as they've been wrong about fusion.
And yet we know (in theory) how to make fusion; it's just an engineering problem. OTOH, we have no idea how to make a human-level AI.
I have no idea why it's come up so much recently. Because of Siri and the iPhone?
You people think you can make definite statements regarding AI progress with extremely limited knowledge of an incredibly complex subject. How do you not understand how absurd it is to reason: "a few scientists I know nothing about, an eternity ago, made predictions - which I may or may not have checked for accuracy - about an extremely primitive field on an enormous timescale (50 years in AI), hence predictions made today by different scientists, holding vastly different information in an indescribably more developed field, are very likely to be wrong."
Not to mention that the only point anyone really needs to understand is that existential risk is the only thing that matters, and that building a superintelligent friendly AI would be the last invention one would ever need to make.
We have nothing to fear from vastly superior artificial brains. If a superintelligent computer were created that was based on the workings of the human mind, then a few seconds after it was started up, it would achieve prajna, realize the futility of living, and switch itself off.
If intelligence reliably led to wisdom and enlightenment, would the world look even slightly like the way it does today?
No one truly understands what exponential progress means, until it's way too late.
The real question is why we are so concerned with the survival of 'humanity' anyhow. It's fun to speculate about the future evolutions of our ecosystem, but why all the moralizing and fretting? I suppose a little drama is fine to get the juices flowing, so long as somebody doesn't start talking about the policy implications.