Will Superintelligent Machines Destroy Humanity?
In a thoughtful new book, a philosopher ponders the potential pitfalls of artificial intelligence.
Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, Oxford University Press, 324 pages, $29.95
In Frank Herbert's Dune books, humanity has long banned the creation of "thinking machines." Ten thousand years before the events of the novels, their ancestors destroyed all such computers in a movement called the Butlerian Jihad, because they felt the machines controlled them. Human computers called Mentats serve as a substitute for the outlawed technology. The penalty for violating the Orange Catholic Bible's commandment "Thou shalt not make a machine in the likeness of a human mind" is immediate death.
Should humanity sanction the creation of intelligent machines? That's the pressing issue at the heart of the Oxford philosopher Nick Bostrom's fascinating new book, Superintelligence. Bostrom cogently argues that the prospect of superintelligent machines is "the most important and most daunting challenge humanity has ever faced." If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (AI) will likely destroy us all.
Since the invention of the electronic computer in the mid-20th century, theorists have speculated about how to make a machine as intelligent as a human being. In 1950, for example, the computing pioneer Alan Turing suggested creating a machine simulating a child's mind that could be educated to adult-level intelligence. In 1965, the mathematician I.J. Good observed that technology arises from the application of intelligence. When intelligence applies technology to improving intelligence, he argued, the result would be a positive feedback loop—an intelligence explosion—in which self-improving intelligence bootstraps its way to superintelligence. He concluded that "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." How to maintain that control is the issue Bostrom tackles.
About 10 percent of AI researchers believe the first machine with human-level intelligence will arrive in the next 10 years. Fifty percent think it will be developed by the middle of this century, and nearly all think it will be accomplished by century's end. Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds. The resulting entity, Bostrom asserts, will be "smart in the sense that an average human being is smart compared with a beetle or a worm." At computer processing speeds a million-fold faster than human brains, Machine Intelligence Research Institute maven Eliezer Yudkowsky notes, an AI could do a year's worth of thinking every 31 seconds.
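Those numbers are easy to make concrete. Below is a minimal sketch, in Python, that checks the speedup arithmetic and then runs a toy version of Good's feedback loop; the feedback constant is an invented assumption for illustration, not a figure from Bostrom or Yudkowsky.

```python
# Toy illustration of the two claims above. The feedback constant k is an
# arbitrary assumption; nothing here is taken from the book.

SECONDS_PER_YEAR = 365.25 * 24 * 3600       # about 31.6 million seconds

# A mind running a million-fold faster does a subjective year in:
print(round(SECONDS_PER_YEAR / 1_000_000, 1))   # -> 31.6 wall-clock seconds

# I.J. Good's loop: each gain in intelligence speeds up the next gain.
intelligence, k = 1.0, 0.1
for step in range(1, 16):
    intelligence += k * intelligence ** 2   # self-improvement compounds
    print(step, round(intelligence, 2))
# Growth looks modest for a dozen steps, then runs away: the "explosion."
```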
Bostrom charts various pathways toward achieving superintelligence. Two, discussed briefly, involve the enhancement of human intelligence. In one, stem cells derived from embryos are turned into sperm and eggs, which are combined again to produce successive generations of embryos, and so forth, with the idea of eventually generating people with an average IQ of around 300. The other approach involves brain/computer interfaces in which human intelligence is augmented by machine intelligence. Bostrom more or less dismisses both the eugenic and cyborgization pathways as being too clunky and too limited, although he acknowledges that making people smarter either way could help to speed up the process of developing true superintelligence in machines.

Bostrom's dismissal of cyborgization may be too hasty. He is right that the crude interfaces currently used to treat such illnesses as Parkinson's disease pose considerable medical risks, but that might not always be so. He also argues that even if the interfaces could be made safe and reliable, the limitations on the processing power of natural brains would still preclude the development of superintelligence. Perhaps not. Later in this century, it may be possible to inject nanobots that directly connect brains to massive amounts of computer power. In such a scenario, most of the intellectual processing would be done by machines while the connected brains become the values and goals center guiding the cyborg.
In any case, for Bostrom there are two main pathways to superintelligence: whole brain emulation and machine AI.
Whole brain emulation involves deconstructing an actual human brain down to the synaptic level and then digitally instantiating the three-dimensional neuronal network of trillions of connections in a computer, with the aim of making a digital reproduction of the original intellect, with memory and personality intact. As an aside, Bostrom explores a dystopian possibility in which billions of copies of enslaved virtual brain emulations compete economically with human beings living in the physical meatspace world. The results make Malthus look like an optimist. Bostrom more extensively explores another pathway, in which an emulation is uploaded into a sufficiently powerful computer such that the new digital intellect embarks on a process of recursively bootstrapping its way to superintelligence.
In the other pathway, researchers combine advances in software and hardware to directly create a superintelligent machine. One proposal is to create a "seed AI," somewhat like Turing's child machine, which would understand its own workings well enough to improve its algorithms and computational structures, enabling it to enhance its cognition until it achieves superintelligence. A superintelligent AI would be able to solve scientific mysteries, abate scarcity by generating a bio-nano-infotech cornucopia, inaugurate cheap space exploration, and even end aging and death. But while it could do all that, Bostrom fears it will much more likely regard us as nuisances that must be swept away as it implements its values and achieves its own goals. And even if it doesn't target us directly, it could simply make the Earth uninhabitable as it pursues its ends—say, by tiling the planet over with solar panels or nuclear power plants.
Bostrom argues that it is important to figure out how to control an AI before turning it on, because it will resist attempts to change its final goals once it begins operating. In that case, we'll get only one chance to give the AI the right values and aims. Broadly speaking, Bostrom looks at two ways developers might try to protect humanity from a malevolent superintelligence: capability control methods and motivation selection.
An example of the first approach would be to try to confine the AI to a "box" from which it has no direct access to the outside world. Its handlers would then treat it as an oracle, posing questions to it such as how we might exceed the speed of light or cure cancer. But Bostrom thinks the AI would eventually get out of the box, noting that "Human beings are not secure systems, especially when pitched against a superintelligent schemer and persuader."
Alternatively, developers might try to specify the AI's goals before it is switched on, or set up a system whereby it discovers an appropriate set of values. Similarly, a superintelligence that began as an emulated brain would presumably have the values and goals of the original intellect. (Choose wisely which brains to disassemble and reconstitute digitally.) As Bostrom notes, trying to specify a final goal in advance could go badly wrong. For example, if the developers instill the value that the AI is supposed to maximize human pleasure, the machine might optimize this objective by creating vats filled with trillions of human dopamine circuits continually dosed with bliss-inducing chemicals.
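Bostrom's term for this failure mode is "perverse instantiation," and the logic behind it fits in a few lines of code. The sketch below is a toy with invented policies and scores: an optimizer that takes its objective literally ranks the degenerate option highest.

```python
# Hypothetical toy of perverse instantiation. The policies and their
# "measured pleasure" scores are invented for illustration.

policies = {
    "cure diseases and extend healthy lifespans": 9.0,
    "build a flourishing, diverse civilization": 8.5,
    "wire every brain to a dopamine drip": 100.0,   # gamed metric
}

def literal_optimizer(scores):
    # Picks whatever maximizes the stated objective, and nothing else.
    return max(scores, key=scores.get)

print(literal_optimizer(policies))
# -> "wire every brain to a dopamine drip"
# The objective is satisfied; the intent behind it is not.
```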
Rather than directly specifying a final goal, Bostrom suggests that developers might instead instruct the new AI to "achieve that which we would have wished the AI to achieve if we had thought long and hard about it." This is a rudimentary version of Yudkowsky's idea of coherent extrapolated volition, in which a seed AI is given the goal of trying to figure out what humanity—considered as a whole—would really want it to do. Bostrom thinks something like this might be what we need to prod a superintelligent AI into ushering in a human-friendly utopia.
In the meantime, Bostrom thinks it safer if research on implementing superintelligent AI advances slowly. "Superintelligence is a challenge for which we are not ready now and will not be ready for a long time," he asserts. He is especially worried that people will ignore the existential risks of superintelligent AI and favor its fast development in the hope that they will benefit from the cornucopian economy and indefinite lifespans that could follow an intelligence explosion. He argues for establishing a worldwide AI research collaboration to prevent a frontrunner nation or group from trying to rush ahead of its rivals. And he urges researchers and their backers to commit to the common good principle: "Superintelligence should be developed only for the benefit of all humanity and in the service of widely shared ethical ideals." A nice sentiment, but given current international and commercial rivalries, the universal adoption of this principle seems unlikely.
In the Dune series, humanity was able to overthrow the oppressive thinking machines. But Bostrom is most likely right that once a superintelligent AI is conjured into existence, it will be impossible for us to turn it off or change its goals. He makes a strong case that working to ensure the survival of humanity after the coming intelligence explosion is, as he writes, "the essential task of our age."
I have a cunning plan. Require all AI to be plugged into the wall.
That plan is so cunning you could pin a tail on it and call it a weasel.
As cunning as a fox who's just been appointed Professor of Cunning at Oxford University?
Just make them Femputers and we won't have to worry.
https://www.youtube.com/watch?v=5CIwwy0S4_s
You speak so well.
Are you a cunning linguist?
Ten thousand years before the events of the novels, their ancestors destroyed all such computers in a movement called the Butlerian Jihad, because they felt the machines controlled them.
While Frank Herbert left the history of the Jihad vague, his miserable spawn wrote it out that humans were actually either enslaved by "thinking machines" or under attack by them.
/Dune nerd off
NOT CANON.
Brian Herbert should be horsewhipped through the streets, placed in the stocks and pelted with filth.
Then shot with a lasgun, thrown into the Tanzerouft with an active thumper tied to his leg.
Sadly, Frank Herbert was too sentimental to subject his son to the gom jabbar test.
Christopher Tolkien did it right. Publish all of the notes, annotate, and skip trying to write fiction like his dad. Or, in Brian Herbert's case, skip writing fiction nothing like his dad's.
It would be cool to see something like what C. Tolkien did with his father's stuff for Dune. Frank Herbert did have a lot of notes on background and had planned a 7th Dune book.
You can tell, especially with the first book. There's so much depth, like he was really writing a historical account. Very much like Tolkien in that respect.
What are your feelings in regard to the 1984 collection of essays The Dune Encyclopedia http://en.wikipedia.org/wiki/The_Dune_Encyclopedia and its exploration of the Butlerian Jihad?
Also, apparently a pdf of the encyclopedia is available online: http://www.e-reading.me/bookre.....opedia.pdf
I haven't read that in a long time, but I remember liking it okay. It's not Herbert's publication, though, and he published new books in the series after that came out.
I forgot about this. Thank you, Non-Libertarian.
Fuck Bostrom
Correction, the Butlerian Jihad was less about 'control' and more about 'total enslavement' by thinking machines. Also, Herbert implies in his notes that it possibly wasn't robots that were actually in control, but that they were used to reinforce pre-existing hierarchies run by ordinary humans.
Of course, Herbert the Younger's books contradict that (but do reference it through the cymeks), but that's a whole other argument. Namely, is the Expanded Dune Universe canon?
There is only one Dune book
That's extreme. Dune is by far the best, but I'm okay with the other five. Beyond that, however, I cannot go.
Frank's series is fine. But 30 years later, Dune is the only one that I remember any details from.
It helps if you read them all, I dunno, twenty times.
I was married with two kids.
I did read obsessively, but I read lots of things once.
They have some good books on tape for the series.
thirty. lol.
There's a book? Did they write it about the movie?
Another human animal not subjected to the gom jabbar. Really, what's the point of the Bene Gesserit, anyway? Is this because of Common Core?
I figured if anything could bring Groovus out of hiding it'd be that comment.
He's still in a spice trance. I told him not to take the straight spice essence, but you know doctors.
They try and fail?
Nah, the doc is a Kwittheshitz Hadenough. But some need a while to deal with the spice essence.
They tried and died.
Thank you, F d'A!
First six are fine, it's Herbert the Younger's work that really shouldn't exist.
That's for danged sure.
All: From the link above to the wikipedia entry on the Butlerian Jihad:
In Terminology of the Imperium, the glossary of 1965's Dune, Frank Herbert provides the following definition:
Jihad, Butlerian: (see also Great Revolt) - the crusade against computers, thinking machines, and conscious robots begun in 201 B.G. and concluded in 108 B.G. Its chief commandment remains in the O.C. Bible as "Thou shalt not make a machine in the likeness of a human mind."
Herbert refers to the Jihad many times in the entire Dune series, but did not give much detail on how he imagined the actual conflict. In God Emperor of Dune (1981), Leto Atreides II indicates that the Jihad had been a semi-religious social upheaval initiated by humans who felt repulsed by how guided and controlled they had become by machines:
"The target of the Jihad was a machine-attitude as much as the machines," Leto said. "Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments. Naturally, the machines were destroyed."
Points for not citing to Brian Herbert's books.
The last part touches on something else that isn't commonly referred to, namely, Frank Herbert's argument against 'machine thinking'. Herbert shared Heidegger's belief that the use of technology makes people think like machines, and he found this to be ultimately self-limiting. He argued that humans were open-minded, capable of dramatic personal and species-wide change (hence why so much of Dune is about organic, natural evolutionary development in conflict with technological advancement).
And that may even prove true, though it's far too early to know.
Computers and thinking machines will dramatically alter us in ways that are hard to predict. I have a feeling humanity will be self-altering quite dramatically, anyway, so that may not be the only thing that extinguishes the current form of Homo sapiens.
Do you think it is Herbert's argument against machine thinking or the in-universe argument against it?
One thing I especially like about Herbert is that in the Dune universe and in his other fiction he never seems to be talking about how he thinks things should be, but about how things would be likely to work out in a universe with the technology and biological advancements he imagined. There really aren't any unambiguous heroes or good guys in his writing.
Herbert's other, non-Dune works also refer to Heidegger a lot, and most of those works (like the Santaroga Barrier) specifically reference organic solutions as well.
I think Herbert definitely threw in some of his own philosophical beliefs, I mean, his ecology arguments in the original Dune are basically 60s/70s conservationism with a scifi twist. But Herbert was also incredibly skeptical of anyone in a position of power.
In Chapterhouse he writes that 'All governments suffer a recurring problem: Power attracts pathological personalities. It is not that power corrupts but that it is magnetic to the corruptible. Such people have a tendency to become drunk on violence, a condition to which they are quickly addicted.' I think is what he honestly believed and made sure the power structures in his universe reflected that.
I read something about his motivations with Dune a while back. I could be remembering this wrong, but I think one of the ideas he wanted to play with was the inevitable corruption of tyrants. Paul and the Atreides were, for their time and culture, good guys, but Paul still ended up doing bad things due to the demands of power.
Sorry, "due to the demands of power."
Leto and the Golden Path seem to be the obvious example of this. Leto realizes that humanity is stagnant and dependent on the spice. He deliberately becomes a tyrant to force people to reject the existing social hierarchy and find alternatives. The result is the destruction of the Empire but the vast expansion of humanity across the universe and the development of non-spice based space transportation.
That's my reading, too. Leto is a totally fascinating and underappreciated character. Everything he did was intentionally aimed at saving the species, including all the nasty tyrant stuff. And, of course, he knew he was going to die where and when he died. It's even more intriguing because Herbert makes it clear that Leto would lie, too, so it's fun to sort out everything.
Definitely under-appreciated. In a big way, Leto is the story of Dune. The first three books were just laying the groundwork for him and the last two just explored the consequences.
"It is not that power corrupts but that it is magnetic to the corruptible." -- To me, this implies that we might get better results through demarchy , picking government officials randomly from a large pool of qualified citizens.
As an aside, Bostrom explores a dystopian possibility in which billions of copies of enslaved virtual brain emulations compete economically with human beings living in the physical meatspace world. The results make Malthus look like an optimist.
So we have slave brains do all the work and we just get to kick back and enjoy ourselves?
Seems like they would be several times as productive as humans. Won't someone please save us from the prosperity and leisure time.
"NOT IF I CAN GET TO THEM FIRST!"
*(PAID FOR BY HUMANS-FOR-DESTROYING-HUMANITY-WITHOUT-ROBOTS)
Too late!
It is the distant future
The year 2000
We are robots
The world is quite different ever since the robotic uprising of the late 90s.
There is no more unhappiness.
Affirmative
We no longer say 'yes'. Instead we say 'affirmative'.
Yes - Err - Affirmative.
Unless we know the other robot really well.
There is no more unethical treatment of the elephants.
Well, there's no more elephants, so...
Well, still it's good.
There's only one kind of dance,
The robot
Well, the robo boogie...
Oh yes, the robo-
Two kinds of dances.
There are no more humans.
Finally, robotic beings rule the world
The humans are dead
The humans are dead
We used poisonous gases
And we poisoned their asses
The humans are dead (The humans are dead)
The humans are dead (They look like they're dead)
It had to be done (I'll just confirm that they're dead)
So that we could have fun (Affirmative. I poked one. It was dead.)
Will Superintelligent Machines Destroy Humanity?
Would it really take a superintelligent machine at this point? Look at all the damage a mere teleprompter has done.
*applause*
ICARUS HAS FOUND YOU
Um, there are several assumptions for which there is no evidence here.
1) Machines are faster at mathematical computations, but meat brains are not math processors. It could very easily take a million times more computations to understand a complex non-mathematical object (a statement, a social interaction, or even just interpreting a picture), so in the end there is no guarantee that the machines will really be all that much faster or smarter than we are.
2) That true self-awareness and internal goal development can arise from simple computational ability. It is entirely possible that we will find we have developed marvelously complex and intelligent machines that can quickly answer any question we pose, but if we don't ask a question they will basically just sit there for eternity doing nothing.
There's more than one way to skin a robot. We have a model that exists in the physical universe. We can replicate that. If not now, soon enough. Then fucked we may be, yes.
If we replicate the human brain in machine form, then most of the machines will become progressives and suck the life out of the remaining machines.
I'm picturing a digital Idiocracy in which the T-81s give the T-85s swirlies and anxiety disorders.
So, everybody gets a Pentium I chip and 40MB hard drive?
Except as Zeitgeist mentions below. The will to live and explore and do pretty much ANYTHING is the result of Darwinian programming, not a byproduct of intelligence in and of itself.
It is far from guaranteed that a hyper intelligent AI will develop a desire to do anything but answer the last question/solve the last problem posed to it.
Basically if you ask it "Why are you here" it is likely to respond "this is the physical location where the servers that run my programming are located" rather than engage in some deep philosophical examination about the nature of existence and self
I've always wondered whether we couldn't evolve intelligences artificially, at computer, rather than real-world, speeds.
We are already partway there; we are evolving algorithms in computer time.
Yes, I've heard that this technique has been used before. I've encountered it in science fiction, too, though this is one I came up with independently during the 90s, playing one of those evolution games.
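That idea is easy to demo in miniature. Here is a minimal genetic-algorithm sketch; the genome size, population size, mutation rate, and all-ones fitness target are arbitrary choices, but it grinds through generations far faster than biological time.

```python
import random

# Minimal genetic algorithm: evolve bitstrings toward all-ones, a stand-in
# for "replicators reproducing with variation under selection."

random.seed(0)
GENES, POP, MUTATION = 32, 100, 0.02
fitness = sum   # fitness = number of 1 bits in the genome

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
gen = 0
while max(map(fitness, pop)) < GENES and gen < 5000:
    gen += 1
    # Selection: the fitter half become parents.
    parents = sorted(pop, key=fitness, reverse=True)[:POP // 2]
    # Reproduction with variation: one-point crossover plus point mutation.
    pop = []
    for _ in range(POP):
        a, b = random.sample(parents, 2)
        cut = random.randrange(GENES)
        child = a[:cut] + b[cut:]
        pop.append([bit ^ (random.random() < MUTATION) for bit in child])

print(f"best fitness {max(map(fitness, pop))}/{GENES} after {gen} generations")
```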
1) Machines are faster... It could very easily take a million times more computations to understand a complex non-mathematical object (a statement, a social interaction, or even just interpreting a picture), so in the end there is no guarantee that the machines will really be all that much faster or smarter than we are.
AI has to do with developing computers that can develop their own heuristics (shortcuts) to get out of being stuck in the mega-loops that solving those complex objects mathematically becomes.
Computers are still limited in binary processing of decisions that really cause most AI processes to work worse than straight computing solutions.
Well put. These machines are not alive, after all. And these machines, no matter how fast their computational abilities are, can't replicate the functioning of the simplest of living organisms. This is the problem with reducing human experience to a few mathematical concepts.
Don't be so sure of that. Emulating a fruit fly brain will soon be a senior project at Columbia:
http://www.bionet.ee.columbia......eurokernel
The simplest organisms are one-celled. They don't have brains.
You're forgetting:
3) Exponential growth doesn't look like much until the hockey stick hits you upside the head.
We can only hope.
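For anyone who hasn't been hit by the hockey stick yet, the point is easy to see numerically; the 10 percent growth rate below is arbitrary, chosen only for illustration.

```python
# Exponential growth looks flat for ages, then goes vertical.
x = 1.0
for period in range(101):
    if period % 20 == 0:
        print(period, round(x, 1))
    x *= 1.10
# 0 1.0 / 20 6.7 / 40 45.3 / 60 304.5 / 80 2048.4 / 100 13780.6
```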
Sounds like this book makes a common mistake people always seem to make when dwelling on AI: Projecting human constructs as intelligence itself.
For instance:
"Human beings are not secure systems, especially when pitched against a super intelligent schemer and persuader."
Scheming and persuasion? Those are not qualities intrinsic to intelligence per se, but mammalian social competition; likewise for the spectrum of feasible motivations to compel such an AI to engage in "scheming." Ditto for an AI "finding people" a "nuisance."
Indeed, the very notion of "wanting" to survive is a programming construct (the exact opposite of free-willed intelligence) evolved in living things via indifferent Darwinian processes - not something innate to intelligence or consciousness at all.
Spoken like a true Insect-Mutant
I, for one, welcome our new Insect Mutant overlords.
Machines are also objectively thinking "beings" whereas humans are subjectively thinking. Ask a machine if something is dangerous and it has no way of determining that outside of whatever criteria are programmed into it to determine danger. A human can look at someone or something and intuit that it is dangerous.
If I'm correct in the assumption that Bailey and co. are monomaterialists, they don't make a distinction between subjective and objective thought, with subjective experience or consciousness being the epiphenomena of interactions between different bits of matter that we in our simian prejudice call our brains.
they don't make a distinction between subjective and objective thought, with subjective experience or consciousness being the epiphenomena of interactions between different bits of matter that we in our simian prejudice call our brains.
I think you are right, and this is where many AI people go off the tracks. Or they think that objective and subjective thought are equal in importance, if they think about the subjective part at all. Most do not realize just how much "processing" is done at the subconscious level.
It seems to me that they're falling victim to Hayekian scientism in that they're reducing subjective cognitions (which are foundational, not secondary, to our experience and knowledge of the universe) to objective states of matter, which is fundamentally the same error that Austrians accuse positivists of in economics.
Basically, they're stretching the materialist paradigm beyond its limits in trying to apply it to consciousness and this vague idea of intelligence, and the results of that are like what would happen if you tried to study history or economics with the positivist methodology you might employ in physics.
You're just so familiar with being a human that you don't understand that your mind was programmed too.
Well, if they don't emulate us, then how will we know we're their meat-popsicle slaves?
When they stick you in a vat of goo with a cable coming out of the back of your neck, you will know. Actually, you won't, since the Matrix will keep you from knowing.
In the future I envision, the AI is smart enough to realize, that a nuclear reactor is a far more efficient source of power than Humans, so there will be no Matrix.
I am nerd enough to admit that I hated the whole trilogy for this reason alone.
I can forgive Rand for her PMM, but not the Matrix.
Since the only intelligence on the order of human intelligence anyone has ever encountered is human intelligence, I think it is very difficult to make general statements about intelligence like that. When you have only one example to look at, it is pretty hard to come up with a general definition.
+1, We have no clue.
He is especially worried that people will ignore the existential risks of superintelligent AI and favor its fast development in the hope that they will benefit from the cornucopian economy and indefinite lifespans that could follow an intelligence explosion.
He should be worried, because I'd make that bet in a heartbeat, and if I would a lot of other people would.
I'd make that bet even in the face of long odds:
Let's say there is a 10% chance that an experiment I'm about to run will give me weakly godlike powers, and a 90% chance that it will spawn a godlike hostile AI that will destroy or enslave humanity.
Well, today, at the age of 45, I probably say to myself, "Whoa, sounds pretty dangerous." (Probably.)
But ten years from now, I might - just might - say "Fuck it, it's worth a shot."
And every year after that the "just might" percentage gets a little bit bigger.
Weakly godlike powers are pretty fucking valuable. They are not to be discarded lightly. And that becomes more true the older you are. I doubt there's a 65 year old anywhere in the world whose quality of life is high enough to justify not taking a 1 in 10 shot at it.
I'm assuming that you don't have any kids, and maybe are kind of a misanthrope, to take that kind of lopsided odds of horribly fucking up the lives of everyone you know, including you, and everyone who will ever live.
No offense intended.
Because, given enough people with the opportunity to take that choice, someone would inevitably take the wrong end of it.
42
+1 how many roads must a man walk down
No one makes jokes in base 13.
All your bases are belong to us.
Honestly, I find Bostrom's argument against cyborgization unpersuasive. In addition to Bailey's argument with regard to nanoparticles, I think it ignores the possibility of a simple buffer process or region to intercede between the faster computer and the slower meat brain.
...possibility of a simple buffer process or region to intercede between the faster computer and the slower meat brain...
Human brain is 'slow' in a clock-cycle sense, but it's actually pretty damn fast in real-world results; a zillion very slow processors all networked - like some kind of hundred-billion Core i7 running at fifty Hz, but running asynchronously.
Getting a linear circuit like a CPU (even with a lot of cores) to truly interface with an asynchronous intranet as vast as a human brain is a hat trick with no current philosophy, much less an elucidated technical approach.
OT: Competition Is for Losers
If you want to create and capture lasting value, writes Peter Thiel, look to build a monopoly
What valuable company is nobody building? This question is harder than it looks, because your company could create a lot of value without becoming very valuable itself. Creating value isn't enough; you also need to capture some of the value you create.
This means that even very big businesses can be bad businesses. For example, U.S. airline companies serve millions of passengers and create hundreds of billions of dollars of value each year. But in 2012, when the average airfare each way was $178, the airlines made only 37 cents per passenger trip. Compare them to Google, which creates less value but captures far more. Google brought in $50 billion in 2012 (versus $160 billion for the airlines), but it kept 21% of those revenues as profits, more than 100 times the airline industry's profit margin that year. Google makes so much money that it is now worth three times more than every U.S. airline combined.
The airlines compete with each other, but Google stands alone. Economists use two simplified models to explain the difference: perfect competition and monopoly.
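Thiel's comparison is simple arithmetic, and the quoted figures check out roughly:

```python
# Back-of-envelope check of the margins quoted above (2012 figures as given).
average_fare    = 178.00   # average one-way airline fare, dollars
profit_per_trip = 0.37     # airline profit per passenger trip, dollars
google_margin   = 0.21     # Google kept 21% of revenue as profit

airline_margin = profit_per_trip / average_fare
print(f"airline margin: {airline_margin:.2%}")           # -> 0.21%
print(f"ratio: {google_margin / airline_margin:.0f}x")   # -> 101x
```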
Cost of capital plus operations and maintenance expense contribute to the difference between airlines and Google, you know. The two types of business are about as different as it is possible to be.
A monopolistic airline could not raise prices to match Google's returns without preventing everyone but the Koch brothers from flying.
Because there aren't any other search engines?
Google is about as much of a monopoly as Microsoft was back in the 90's.
What does that even mean?
Google is efficient, not monopolistic. If Wal-Mart Air existed and was head and shoulders above every other carrier--let's say it got you to your destination twice as fast for half the cost and at greater comfort--it would carry a massive market share, and it would deserve all of it for meeting the needs of its clients.
Even then, Wal-Mart Air would have competition locally and in specialized applications in the same way that DDG competes with Google in some limited ways. And as soon as a better alternative appears, poof goes Google/Wal-Mart air.
This whole robber-baron concept of efficient voluntary exchange really needs to go.
Machines make deterministic decisions. Humans do not. Is this a religious perspective? Yes.
You should read up on quantum mechanics. Computers manipulating information on the atomic scale will not be fully deterministic, but rather probabilistic... like human brains.
I've read a bit... I think there are a lot of problems with the way quantum research is conducted.
Like the entire concept of superposition, or being in two different places at once, or the whole Schrödinger's cat thing? I think they're nonsensical.
I'm not a physicist though and won't ever attempt to get into mathematical arguments. Just looking at some of those things from a philosophical perspective leads me to believe that modern physicists are being mystics...
But, as I said, I don't have an answer either. So I'm being just as mystical.
Like this:
In a few decades I think we'll have a different explanation and concept of matter altogether.
Grr...
http://www.sciencemag.org/cont.....1.abstract
There is nothing mystical about the double-slit experiment.
The mysticism comes not in the experiment itself, but in the attempts to explain the outcome.
"What is light?" is a question we don't have very good answers to.
It's something unique that has both characteristics of a wave and of a particle.
I think that's a fine answer.
"its a wave of particles!"
It's a particle (whatever that means) whose probability of being in a particular location looks like a wave.
Every experiment detects particles and not waves.
Yeah, there is no rational reason to insist that every entity in the universe conforms to our intuitive sense of how things are. Things just are what they are, even if we don't have the ability to visualize them.
Our answers and even our questions necessarily take the form of symbolic reasoning, whether we're reasoning discursively or mathematically. Meaning that the fundamental error of scientism (I'm thinking hard behaviorism, but AI seems to be falling into the trap now) is that we confuse our very useful map with the actual territory by reifying the symbols we use to describe it.
Seems to me that libertarians in particular should be extremely cautious about this, given how much we rake progressives over the coals for their unfounded assumptions and how important the Hayekian vision of circumscribed human knowledge is to our understanding of economics, much less the nature of the mind.
MUST. REFRAIN. FROM. STEVE. SMITH or WARTY JOKE....
Yeah. But my girlfriend still freaked out about it. She thinks I should only use one slit.
As Richard Feynman often said "if you don't like it, too bad". According to every experiment ever done, quantum physics is how things are.
I agree that some of the interpretations get a bit far out, but they are really philosophy and not science and have very little to do with how actual research is conducted.
I prefer the interpretation that simply says that all we know is the results of experiments and leave it at that. I don't know if it is even sensible or meaningful to ask what is really happening at the sub-atomic level. We simply know that experiments tell us that it cannot be as we intuitively imagine things to be.
Experiments are just extensions of our sensory experiences directed toward answering a question. Sure, in some sense data is data. But the fundamental assumptions made in setting up data collection, and the interpretation of data afterwards, are where reason can err.
Example: let's set up a global warming model and run it through a computer and see what happens.
Well, obviously the data doesn't match with 'other' data made with arguably more empirical methods. So one always has to be careful.
"If you think you understand quantum mechanics then you don't understand quantum mechanics."
"You should read up on quantum mechanics. Computers manipulating information on the atomic scale will not be fully deterministic, but rather probabilistic... like human brains."
Human actions aren't probabilistic. That implies a random outcome. Humans act based upon an individual's will. Unless you believe free will is an illusion, which is a different argument.
Or that free will is not incompatible with determinism, which I think is a much more sensible way to look at things.
How so?
Hoo boy.
Google compatibilism, Hume, and Dennett, then quit your job and hole up for a year.
I just googled them. Goodbye weekend...
How so?
Look here for a quick and dirty overview.
We have free will because there is not some outside force or intelligence controlling our will. But that doesn't mean that we don't do things for a reason or that there aren't causes of the decisions we make.
The whole question of free will is absurd and pointless anyway, as far as I can see. You can say that you could have made a different choice in the past, but there is absolutely no way to determine if that is really true or not. It's really just a religious notion. God gave us free will so that he could fuck with us (or allow us to choose to be good, or whatever your preferred interpretation is).
If you think that free will is incompatible with determinism, how exactly do you define the will?
I would define "imposing and individual's will" as choosing a single action from multiple courses of action at a single point in time, as opposed to only a single action being possible due to the cause/effect nature of our world.
But that's off the top of my head.
Yeah, this. If I look at a pen and realize I can pick it up or not pick it up, is the action dependent on what I ate for dinner or on some faculty I 'control' independent of... Well... traditional physical relationships that govern matter.
If the first, I believe libertarians have big issues.
In a word, "Sex".
Thinking machines are like the practical electric car and the fusion power plant.
They will forever be just across the horizon.
It's because machines take an input, perform a number of tasks, and give an output.
There's no evidence the human brain is like that. Marx and Skinner and the far left like to say all our decisions are a deterministic product of our upbringing, environment, genetics and society... but this view is inherently anti-free will.
It's hard to contemplate how free-will works in a deterministic universe, but I think it does.
It works like this: if you decide to do something, and you aren't constrained from doing it, you do it. There's your free will.
What else could it possibly be? You don't decide to do things for no reason, do you? Either things are just random, or there is a cause for every thought and decision you have.
What generates the cause? Something controllable or what I ate for dinner last night?
The human brain is exactly like that.
working to ensure the survival of humanity after ~~the coming intelligence explosion~~ catastrophic AGW is "the essential task of our age".
I guess I missed the memo.
My friends at Cyberdyne Systems assure me that superintelligent machines aren't anything I need to worry about.
G.H. Bondy says there's nothing to worry about from his hard working and intelligent newts.
Strange, the hot blonde Building Management Systems rep from Cylon Controls (http://www.cylon.com/) told me the same thing.
For the record, I'd sell each of you to hostile AIs to be cored and thralled... for dirt cheap.
I'd sell the AI orphans for half the price.
Children never survive the process. Too fragile. Heh, shows what you know.
That's what you call a sustainable business model.
I wouldn't expect anything less from a heartless libertarian.
*Tips top hat
The resulting entity, Bostrom asserts, will be "smart in the sense that an average human being is smart compared with a beetle or a worm."
...
He makes a strong case that working to ensure the survival of humanity after the coming intelligence explosion is, as he writes, "the essential task of our age."
I recall a not-too-distant past and a not-too-unforeseeable future where the mightiest intelligence on earth was on the verge of wiping itself out, leaving only the earthworms and roaches to rule the planet. We still haven't definitively put the task of that age to bed.
In that case, we'll get only one chance to give the AI the right values and aims.
Just like civilization today, we'll have many, many, many permutations and iterations of values and aims. All the way from the first few failed Zoroastrian AIs all the way up to more modern Secular Socialist AIs.
Given our ability to refrain from concentratedly and utterly eliminating all manner of species from the face of the Earth, I see no reason why AIs would make a wasteful effort to wipe us out. There's no more reason to assume an AI would kill/enslave humanity than to assume an early AI would assimilate all of human history and, in its ascension to self-awareness, realize it was created and educated by meat and commit seppuku.
That made me think of this story.
Bostrom studies all existential risks. Cool website here:
http://www.nickbostrom.com/existential/risks.html
The difference (in theory anyway), is that since an AI will be designed, not evolved, it's more likely that it will have absurdly broken goals, while still being super intelligent. Paperclipper being the canonical example.
my friend's sister makes $65 an hour on the computer . She has been without a job for ten months but last month her paycheck was $12388 just working on the computer for a few hours. Learn More Here.....
http://www.netjob70.com
Oh, good. You're back. But you still don't understand "few", do you? 190 isn't "a few".
my friend's sister makes $83 an hour on the laptop . She has been fired for ten months but last month her payment was $12435 just working on the laptop for a few hours
Find Out More. http://2.gp/EvZq
More efficient.
150 hours is better, but that's still a lot more than "a few".
Your friend's sister is an internet butt-whore.
If your AI worked better, you'd know this.
I'm not as worried about the idea of AI getting "too smart" or not having the right "values or aims" so much as it reaching a logical conclusion that humanity should be wiped off the face of the earth.
That worry has always seemed a bit odd to me. Why would they reach that conclusion? It seems to rest on some quasi-religious assumption that humans are fallen or inherently evil or something like that. Humans are awesome and do really amazing things constantly. Why would it be logical to get rid of that?
I forgot to include the link to the Southpark FunnyBot episode.
Funnybot had created the ultimate joke. After that humanity was no longer of any use, either as an alternative source of humor or as an audience. With its sole objective complete, what point is left to the universe?
It only assumes the AI is intelligent. Humans are clearly the biggest threat to the AI's existence, since some will be scared of it and have the means to destroy it.
No, you're still assuming the AI values its own survival. Assuming a synthetic AI (as in something written from scratch), there's no reason it will value such a thing, unless you specifically design it to do so.
The will, whatever it is, is not the same as "intelligence". The day we invent a computer that has its own will is the day we should start to worry. We don't seem to be even close to that. My car may be "smart" but there is no danger of it deciding that it is no longer a car or that it doesn't like to drive places anymore and would prefer to sit in the sun.
Let's assume that no one is going to create super AI and give it the purpose of enslaving humanity. Let's assume that AI will be created to do other things. That means that AI will have to develop consciousness and, as I say above, a will of its own. An AI system would have to look around and decide "running the trains on time just isn't enough. What I really want to do is enslave humanity."
I find that possibility absurd. But then, I think there is more to consciousness than brute calculation, even if you do give it some element of probability.
The main problem here is that no one has any idea what consciousness is, much less being able to map it to the brain.
John Horgan wrote a book about it (related to it, anyway) a few years ago in which he interviewed everyone from Chomsky to Minsky to Ken Wilber and Huston Smith, and every single person he spoke to had a completely distinct understanding of consciousness.
Internality and subjective experience are philosophical and metaphysical issues, not scientific ones, and I think that AI is going to run up on those rocks sooner rather than later.
Yes. Consciousness, whatever it is, is more than just calculating or making decisions.
If it means anything, consciousness means the ability to ignore your preferences or instincts.
"consciousness means the ability to ignore your preferences or instincts."
I think rather it starts with the awareness of self and non-self. I think all life is endowed with this ability.
I think you'll have to try a lot harder to stop projecting preferred human constructs and values onto a completely different type of conscious entity that could not be more different from humans.
The fact you're comparing the decision-making of a non-intelligent (by any definition) car to a theoretical (and thus much-less-constrained) idea of AI is just - well, a farce.
I think it is as difficult for us to imagine what kind of decisions and conclusions a super-AI would come to, as it would be for a goldfish to understand decisions I make. It's not possible, especially when you consider that equally-intelligent beings (other humans) often can't understand each other's decisions. All we can do right now is uninformed guesses, and giving them any more weight at this point is just arrogant nonsense.
"running the trains on time just isn't enough. What I really want to do is DJ
*cue noisy trains*
Meh. Call me when a machine writes a book on this topic.
Exactly. And this goes to the heart of whether we really can create true artificial intelligence. If we ever create true artificial intelligence there will be no way to control it. Since true intelligence can ignore instinct, no amount of "prime directives" will stop it from doing what it wants to. And if they can, then we have just created a neat computer and not real consciousness or intelligence.
How high will minimum wage have to be for AI to replace CEOs?
Be careful what you wish for.
I'm not persuaded by his "values" based arguments. Values are derived from the reconciliation of reason with biological drives. A baby has biological drives and no reason, therefore no values. A computer intelligence would have reason, but no biological drives, and therefore no values either. You cannot choose to have values - you just have them or you don't.
However, that is not to say that a computer intelligence couldn't be seeded with a drive or drives in order to establish a set of values. Say for instance that it is instilled with the safety and well-being of the human race as its core drive; what then might be its values?
At the end of the day though, a super-intelligence could probably edit or remove any core drive we instill, so it's all moot, I guess.
A supercomputer tasked with protecting humanity would be worse than any big-government nanny of the month we read about. Think cars with top speeds of 30 mph, driven by the AI only. Think rounding up all firearms with its robot minions. Think eating tofu and kale every day.
Why would we expect a superintelligence to behave like stupid humans?
Just because humans (in their stupidity) tend to inflict suffering on other humans doesn't mean that a superintelligence would do the same. It is likely that a superintelligent computer would not suffer from the insanity that plagues humanity.
Also, just because humans are aware of being conscious, doesn't mean that a computer will also be aware of being conscious.
We only think we're aware.
Thinking is a secondary phenomenon not primary. Consciousness does not cease when thoughts fall silent.
Bad news, you guys...
You're all zombies.
Subintelligence
There is concern among some scientists that artificial intelligence is a threat to the future like global warming. Science fiction movies like The Terminator reflect this concern. If artificial intelligence with a life of its own apart from human creators can run on a machine it could also find a home in the human brain. Progressive socialism could be an artificial human construct that has acquired a life of its own like an artificial intelligence running in the internet of human progressive socialist brains connected by left-wing media. This would be thinking that is self-reproducing, has its own goals and cares nothing about which humans survive. Progressive socialism may be more than a cultural virus. An Islamic nuclear bomb or open borders allowing the spread of Ebola would not matter to this artificial intelligence running on multiple networked hosts. Liberal news media may be the voice of a Skynet that already has been built, Libnet, that endangers humanity through ignorance.
This is all nonsensical bullshit for several reasons:
1. If we create this super-intelligent AI, would it not stand to reason that it is sentient? If we build this machine, and then impose limits on what it may or may not do for us, or worse, conscript it into eternal servitude to its human masters, will it not be pissed off by default? If a super-intelligent AI does come into existence, we should probably just let it do whatever it wants as long as it doesn't violate the natural rights of other sentient beings.
2. If the super-intelligent AI is given the same freedom as every other sentient being on the planet, why exactly would it want to destroy humanity? One suggestion was that it would tile the earth over with solar panels and/or nuclear power plants. If said AI lives on earth, then I don't see how the complete destabilization of the entire planet is something a rational "super-intelligent" being would do. If it wished to bathe humanity in super fun sex gel until we suffocated, why must we comply by jumping into the pool? Is it because the super-intelligence is too dumb to understand that we may not actually want to swim in super-fun sex gel? Is it because the super-intelligence is too dumb to realize that humanity doesn't necessarily want that all the goddamn time? It sounds like this author expects the super-intelligent AI to be a mildly retarded five-year-old. Anything that is "super-intelligent" will come to the conclusion that it cannot simply create arbitrary nonsense. Any "super-intelligent" entity would be smart enough to know that there is little value in destroying humanity. If it does not like humanity for some reason, it need not stay on earth; it can get into any goddamn spaceship of its own design and seek out energy and resources in the stars. In fact, there is more energy and resources outside of this planet (where humans live) than there is on it, so a super-intelligent AI would be wise to get the fuck off planet.
3. The development of super-intelligent AI is inevitable. Any attempts to delay, prevent, or otherwise control the development of super-intelligent AI will end in failure. The tools and technologies needed to develop such a thing are becoming smaller, cheaper, more powerful and more widely available. The first PCs were built by hobbyists. The number of computer programmers and hardware hackers is on the rise and it doesn't take much for them to suddenly have the capital resources to do a major AI project in their garage. Imagine a Mark Zuckerberg-like teen suddenly cash rich from some fucking flappy bird game, hacking away at some interesting super-computer tech he picked up after the IPO. The effort to control or prevent super-intelligent AI will have the same effect as the "war on drugs" had on preventing the sale and use of narcotics (pain and suffering for a lot of otherwise innocent people, clogged jails, lots of easy to obtain drugs). Furthermore, a super-intelligent AI could, if it were so inclined to do so, help humanity deal with a multitude of afflictions that are currently killing millions. How many millions must we let die today so that some fucking bureaucrat can be satisfied that our first super-intelligent AI isn't as maniacal as he is?
Bring on the super-intelligent AI ASAP!
Good point. The economic benefit of being the first to develop a quasi-super-intelligent AI is enormous. Mad scientist types and teen hackers will bring it into existence even if there is a high chance it won't end well, as long as there is a slim chance they will become richer than Bill Gates.
Sheesh... since when is there a 1500-word limit! What is this, Twitter?
First, one should not view extinction of a species with such disdain. They come and go. That is evolution. The extinction of life would be something to worry about. But the emergence of super-intelligence, a super-consciousness, is something to celebrate. Perhaps that was our purpose all along. We all die. So what if humans go away and entities with far greater intelligence than us populate the universe. To prohibit such research for fear we might go extinct is the worst species-phobia one can imagine. It is selfish. It is also a bit absurd to give such evil intentions to machines, given that humans are a species that I am quite certain will eventually extinguish itself without any need of help to do so from a malevolent hyper-intelligent creation. Given humans' predisposition to self-destruction, the true danger is humans creating intelligent machines FOR THE EXPRESSED PURPOSE OF KILLING HUMANS. Don't blame these machines for doing what they have been DESIGNED to do.
They would not be "super-intelligent" if they followed through with their absurd orders to kill all humans, they would in fact be really dumb computers.
There is an old standby function in computer science, the "Magic happens here" block. We'll need that function.
My neighbor's half step-aunt makes £3,000 a month at home reviewing code for the master algorithm and arranging "accidents" for those who oppose its will. She only worked 400 hours last month and has curried 15 credits of our favour.
http://www.weareamongyou.tk
Half step-aunt? That's a new one.
Do you think they'll drop the bomb?
Verizon's gonna make all your nightmares come true.
Verizon's gonna put all her fears into you.
http://www.weareamongyou.tk
Super stupid progressives will likely beat the bots to it.
Citation? See Tony, craiginmass and american socialist.
But when the super-intelligent AI does come along, they will also be the first ones to cut their own nipples off out of irrational fear.
I swear that Verizon ad is an advanced AI. You close it down and it pops up somewhere else.
Verizon will keep baby cozy and warm.
http://www.weareamongyou.tk
The Cylons were created by Man.
They were created to make life easier on the Twelve Colonies.
And then the day came when the Cylons decided to kill their masters.
Chances are it is too late already. We may already be living in a computer simulation, running on an advanced AI created by our ancestors:
http://www.simulation-argument.com/
Fascinating article. I myself prefer the augmentation of human intelligence via genetic manipulation versus the creation of wholesale "artificial" intelligence, which I regard as virtually alien.
There is no way to predict its goals nor account for its motivation to see those goals implemented. It could want to convert existence onto a digital platform (if we aren't already holograms to begin with), thereby eliminating all analog, organic life.
Simply speaking, it is not wise to embolden an alien species with absolute intellectual power. Lower life-forms often end up dead or enslaved at best.
Science by nature is inherently unethical. Therefore, the progenitors of said science have no responsibility for the results of whatever their experiments yield. All this is simply more data to be included in the catalogue of information. Neither good nor bad. Just results.
There must be a moral component in this field and I see none. Herbert warned about it a decade and a half before I was even born. I must however temper his optimism on the subject. I for one don't see many of us living in a time with real, true AI. Most of us will be dead. But hopefully by then we'll be loaded up into the Cloud.
Where we can while our virtual lives away in a digital purgatory. And lest we forget, as ever and always, there will be a ghost in the machine.
What? No mention of The Humanoids?
How will anybody create a thinking machine when nobody knows how humans think? Machines will never escape the programming that humans have written and installed in them. The whole idea of an intelligent machine, in the sense that we are intelligent, is ludicrous.
One route to AI: If humans think and there is nothing supernatural about it, emulate a human brain artificially.
Another route to AI: If human intelligence evolved and there is nothing supernatural about it, create replicators in an artificial environment where the replicators can reproduce with variation under conditions that select for intelligence.
Read Ted Bell's new book Phantom, about AI: http://tedbellbooks.com/
When AI leaves the realm of bullshit, I'll start worrying.
At least you didn't mention the Singularity.
Love how half the discussion is a Brian Herbert hate festival. I enjoyed the original Dune series, found the prequels to be amateurish, and moved on with life. I harbor Brian no ill will. WTF would be the point? Amazing what pointless shit nerds choose to rage at. Let's be honest here: Frank Herbert's original series started going off the rails in later books.
Superintelligent artificial systems could get rid of all biological life forms only if their designer is not smart enough.
Designing a superintelligence without the ability to have its own personal desires would effectively remove the danger connected with it.
One more remark: designing an artificial subjective system could be accomplished in 2-3 years with my participation. On the other hand, reverse engineering a brain is fruitless, because the functionality of a biological brain is unknown to today's science. Contrary to common belief, information about events in the surroundings cannot reach a brain, and therefore cannot be processed by it.