The End of Humanity: Nukes, Nanotech, or God-Like Artificial Intelligences?
Closing dispatch from the Oxford Catastrophic Risks Conference
Oxford, England—The Global Catastrophic Risks conference sponsored by the Future of Humanity Institute concluded on Sunday. Participants were treated to a series of presentations describing how billions of people could potentially bite the dust over the next century. The possible megadeath tolls of both natural and biotech pandemics were considered. The chances that asteroids, comets, or gamma ray bursts from a nearby supernova could wipe out humanity were calculated. The old neo-Malthusian threats of overpopulation, resource depletion, and famine were trotted out. But these risks to future human well-being paled in comparison to one main menace—malicious human ingenuity.
Human ingenuity forged the still massive arsenals of nuclear weapons held by the United States and Russia. And, as the conference participants argued, human ingenuity is on track to craft nanotech fabricators that can make essentially any product, including weapons of mass destruction, at essentially no cost, not to mention a self-improving artificial intelligence possessing god-like powers to pursue its own goals.
First, let's consider the nuclear threat. Joseph Cirincione of the Ploughshares Fund pointed out the good news that the world's nuclear arsenals have been cut by more than half, down from 65,000 warheads at the height of the Cold War to 26,000 today. However, the U.S. retains 10,685 nuclear bombs and Russia is estimated to have around 14,000. Of those, 4,275 in the U.S. and 5,192 in Russia are active. Both countries maintain 2,000 weapons on hair-trigger alert, ready for launch in 15 minutes or so. Cirincione offered a couple of scary scenarios, including one in which all 12 missiles aboard a single Russian submarine, carrying 48 warheads with about 5 megatons of total destructive power, are launched without authorization. Such an attack would kill 7 million Americans immediately. A retaliatory American attack aimed at several hundred Russian military assets would kill between 8 and 12 million Russians.
With regard to the possibility of an accidental nuclear war, Cirincione pointed to the near miss that occurred in 1995, when Norway launched a weather satellite and Russian military officials mistook it for a submarine-launched ballistic missile aimed at producing an electromagnetic pulse to disable a Russian military response. Russian nuclear defense officials opened the Russian nuclear "football" in front of President Boris Yeltsin, urging him to order an immediate strike against the West. Fortunately, Yeltsin held off, arguing that it must be a mistake.
A global nuclear war scenario in which most of both the Russian and American arsenals were fired off would result in 105 to 230 million immediate American deaths and 28 to 56 million immediate Russian deaths. One effect of such an exchange would be a rapid cooling of global temperatures as sunlight was blocked by dust and smoke. Cirincione argued that even a nuclear war limited to the bitter enemies India and Pakistan could produce enough dust and smoke to lower global temperatures by one half to two degrees Celsius, plunging the world back into Little Ice Age conditions.
The good news is that Cirincione sees an opening for negotiations to totally eliminate nuclear weapons. He pointed to an initiative by the "Four Horsemen of the Un-Apocalypse": former Secretaries of State Henry Kissinger and George Shultz, former Sen. Sam Nunn (D-Ga.), and former Secretary of Defense William Perry, who aim to eliminate nuclear weapons completely. In fact, both of the presumptive major party presidential candidates, Sen. John McCain (R-Ariz.) and Sen. Barack Obama (D-Ill.), have explicitly endorsed the idea of global nuclear disarmament. Cirincione argued that a disarmament commitment by the declared nuclear powers would help persuade countries like Iran that they do not need to become nuclear powers themselves.
Cirincione danced around the question of what to do about Iran's pursuit of nuclear weapons, pointing out that its nuclear facilities are hardened, dispersed, and defended. Cirincione asserted that the U.S. has 5-day and 10-day plans for taking out Iran's nuclear facilities, but he noted that such plans don't end the matter. Iran has moves too, including trying to block oil shipments through the Strait of Hormuz, revving up terrorist attacks in Iraq, and even aiding terrorist attacks in the U.S. Cirincione claimed that the Iranians are still five to ten years away from making a nuclear bomb. On a side note, Cirincione admitted that he initially did not believe that the Syrians had constructed a nuclear weapons facility, but he is now convinced that they did. The Syrians hid it away in a desert gully, disguising it as an ancient Byzantine building.
Terrorism expert Gary Ackerman from the University of Maryland and William Potter from the Monterey Institute of International Studies evaluated the risks of two types of nuclear terrorism: the theft of nuclear material to construct a crude bomb, and the theft of an intact nuclear weapon. They set aside two lower-consequence attacks: the dispersal of radiological material by means of a conventional explosion, and sabotage of nuclear facilities. Could non-state actors, i.e., a terrorist group, actually build a nuclear bomb? Potter cited an article by Peter Zimmerman estimating that a team of 19 terrorists (the same number that pulled off the September 11 atrocities) could build such a bomb for around $6 million. Their most challenging task would be to acquire 40 kilograms of highly enriched uranium (HEU). There are 1,700 tons of HEU in the world, including 50 tons stored at civilian sites. Potter acknowledged that intact weapons are probably more secure than fissile material.
Ackerman noted that only a small subset of terrorists has the motivation to pursue nuclear terrorism. "So far as we know only Jihadists want these weapons," said Ackerman. Specifically, Al Qaeda has made ten different efforts to get hold of fissile material. Ackerman told me that Al Qaeda had been defrauded several times by would-be vendors of nuclear materials. Just before the September 11 atrocities, two Pakistani nuclear experts visited Osama bin Laden in Afghanistan, apparently to advise Al Qaeda on nuclear matters. One possibility is that if Pakistan becomes more unstable, intact weapons could fall into terrorist hands. Still, the good news is that intercepted fissile material smugglers have actually been carrying very small amounts. Less reassuringly, Potter noted that prison sentences for smugglers dealing in weapons-grade nuclear material have been shorter than those meted out for drunk driving.
One cautionary case: two groups invaded and seized the control room of the Pelindaba nuclear facility in South Africa in November 2007. The intruders were briefly arrested and then released without further consequence. Both Ackerman and Potter agreed that it is in no state's interest to supply terrorists with nuclear bombs or fissile material; such an attack could be easily traced back to the supplier, which would suffer the consequences. Ackerman cited one expert estimate that there is a 50 percent chance of a nuclear terrorist attack in the next ten years.
While nuclear war and nuclear terrorism would be catastrophic, the presenters acknowledged that neither constituted an existential risk; that is, a risk that could cause the extinction of humanity. But the next two risks, self-improving artificial intelligence and nanotechnology, would.
The artificial intelligence explosion?
Singularity Institute for Artificial Intelligence research fellow Eliezer Yudkowsky began his presentation with a diagram of the space of possible minds. Within that vast space, a small dot represented human minds. His point was that two artificial intelligences (AIs) could be far more different from one another than we are from chimpanzees. Yudkowsky then described the relatively slow processing speeds of human brains, the difficulty of reprogramming ourselves, and other limitations. An AI could run a million times faster, meaning that it could get a year's worth of thinking done in about 31 seconds. An "intelligence explosion" would result because an AI would have access to its own source code and could rapidly modify and optimize itself. It would be hard to make an AI that didn't want to improve itself in order to better achieve its goals.
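The 31-second figure is plain unit conversion; a quick back-of-the-envelope check (the million-fold speedup is the presentation's assumption, the rest is arithmetic):

```python
# Back-of-the-envelope check of the "year of thinking in ~31 seconds" claim.
# The million-fold speedup is the presentation's stated assumption.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60   # about 31.6 million seconds
SPEEDUP = 1_000_000                         # AI thinking a million times faster than a human

wall_clock_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year passes in about {wall_clock_seconds:.1f} seconds")  # ~31.6
```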
Can an intelligence explosion be avoided? No. A unique feature of AI is that it can also be a "global catastrophic opportunity": success in creating a friendly AI would give humanity access to vast intelligence that could be used to mitigate other risks. But picking a friendly AI out of the space of all possible minds is a hard and unsolved problem. According to Yudkowsky, the unique features of a superintelligent AI as a global catastrophic risk are that there is no final battle (an unfriendly AI simply kills off humanity), there is nowhere to hide (the AI can find you wherever you are), and there is no learning curve (we get only one chance to produce a friendly AI). But will it happen? Yudkowsky pointed out that there is no way to control the proliferation of the "raw materials," e.g., computers, so the creation of an AI is essentially inevitable. In fact, Yudkowsky believes that current computers are sufficient to instantiate an AI; researchers just don't know how to do it yet.
What can we do? "You cannot throw money or regulations at this problem for an easy solution," insisted Yudkowsky. His chief (and somewhat self-serving) recommendation is to support a lot of mathematical research on how to create a friendly AI. Of course, former Sun Microsystems chief scientist Bill Joy proposed another solution: relinquishment. That is, humanity has to agree to some kind of compact to never try to build an AI. "Success mitigates lots of risks," said Yudkowsky. "Failure kills you immediately." As a side note, bioethicist James Hughes, head of the Institute for Ethics and Emerging Technologies, mused about how much longer it would be before we would see Sarah Connor Brigades gunning down AI researchers to prevent the Terminator future. (Note to self: perhaps reconsider covering future Singularity Institute conferences.)
The menace of molecular manufacturing?
Next up were Michael Treder and Chris Phoenix from the Center for Responsible Nanotechnology. They cannily opened with a series of quotations claiming that science will never be able to solve this or that problem. Two of my favorites were "Pasteur's theory of germs is a ridiculous fiction" (Pierre Pachet, 1872) and "Space travel is utter bilge" (Astronomer Royal Richard Woolley, 1956). The point, of course, is that arguments that molecular manufacturing is impossible are likely to suffer the same predictive failures. Their vision of molecular manufacturing involves using trillions of very small machines to make something larger. They envision desktop nanofactories into which people feed simple raw inputs and get out nearly any product they desire. The proliferation of such nanofactories would end scarcity forever. "We can't expect to have only positive outcomes without mitigating negative outcomes," cautioned Treder.
What kind of negative outcomes? Nanofactories could produce not only hugely beneficial products such as water filters, solar cells, and houses, but also weapons of any sort. Such nanofabricated weapons would be vastly more powerful than today's. Since these weapons are so powerful, there is a strong incentive for a first strike. In addition, an age of nanotech abundance would eliminate the majority of jobs, possibly leading to massive social disruptions. Social disruption creates the opportunity for a charismatic personality to take hold. "Nanotechnology could lead to some form of world dictatorship," said Treder. "There is a global catastrophic risk that we could all be enslaved."
On the other hand, individuals with access to nanofactories could wield great destructive power. Phoenix and Treder's chief advice is more research into how to handle nanotech when it becomes a reality in the next couple of decades. In particular, Phoenix thinks it is urgent to study whether offense or defense would have the advantage. To Phoenix, offense looks a lot easier; there are many more ways to destroy things than to defend them. If that's true, it should narrow our future policy options.
This conclusion left me musing on British historian Arnold Toynbee's observation: "The human race's prospects of survival were considerably better when we were defenseless against tigers than they are today when we have become defenseless against ourselves." I don't think that's right, but it's worth thinking about.
Ronald Bailey is reason's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.
Disclosure: The Future of Humanity Institute is covering my travel expenses for the conference; no restrictions or conditions were placed on my reporting.
Cue ominous music.
Duuuuh dum.
Duuuuuuuuuuuuh dum.
Dum dum dum dum dum dum dum dum
But all of these issues have been so well documented by Hollywood, what could they possibly uncover now?
BTW, if you people loved human survival as much as me you would be driving a hybrid* too.
*Burns both gas and rubber 🙂
My car runs on soylent biodiesel.
TD,
When I get done with the motor and fuel system, mine will run on that fancy undrinkable alcohol in addition to other organic hydrocarbons, but I doubt I would run that stuff more than once a quarter.
Now, when I get around to building up a hauler truck, I am with you on the all-source diesel, but I will be leaning towards train oil, only soylent biodiesel in a pinch 😉
"My car runs on soylent biodiesel."
My car runs on soylent green.
John,
Do you just use the wafers in a boiler to heat the steam or do you have some other system going?
Man, I like making fun of hippies just as much as the rest of you, but the compressed air car is a cool idea. Probably not very safe structurally, but still ingenious.
Guy,
The wafers actually can be ground into a liquefied paste and burned like diesel fuel.
Do you just use the wafers in a boiler to heat the steam or do you have some other system going?
A person truly concerned with saving the Earth would just drain the blood of the capitalists directly into her/his eco-vespa instead of processing his/her fuel.
John, why do you hate the Earth? She is mother to us all...
John,
Cool. For a moment I thought you were joking and was about to inform you that TD and I are serious.
Never mind.
SF,
That is still something to laugh at.
The air car looks pretty cool. I saw an electric dirt bike on the net the other day. That sounds really cool being able to ride through the countryside with virtually no engine noise.
Guy,
I was joking. Apparently Sugarfree is the only one who got it. Google Soylent Green. I know you guys are serious. I was just being a smart ass.
John,
Sounds like something great for Cavalry Scouts. Last round of bikes for them were Yamaha IIRC.
"The humans fear what they don't understand."
So replicators would be bad because they would make people unemployed? I don't know about you all, but I work to pay for stuff, food, and shelter. I grant that there would be social disruption, but I am willing to chance it. And I say that as an employee of a freight forwarder facilitating the movement of thousands of containers a year from the Far East to North America.
As for the AI, does emotion (like hate) necessarily follow from intelligence? We shouldn't make an AI because it could kill us; well, any baby could become a vicious serial killer, mass murderer or elected official, but I don't usually see that argument used against having kids.
In the Animatrix, the AIs tried to give the humans all the material goods they could use, but the politicians, fearing the loss of power from people not needing them, manipulated humanity into starting the war. The AIs were pro-human until humanity started a genocidal (luddicidal?) war against them.
IIRC, some people have proposed introducing baby AIs into Second Life-like environments so they can get used to people and people can get used to them. My only worry is the kind of scumbags who torture pets, people, etc. getting to them before they realize that such scumbags are a minority.
John,
We are big on the soylent green already! You never noticed?
"Sounds like something great for Cavalry Scouts. Last round of bikes for them were Yamaha IIRC."
Those guys have all the fun. I would love to be turned loose on some big-ass maneuver range with a license to tear ass around as I pleased. I never thought of it that way, but they would be good for scouts. You could just keep a diesel generator in the rear to recharge them. The only problem would be that it might take a while to recharge them.
the compressed air car is a cool idea.
"Why no, it's not designed to haul more than 2 people nor any cargo at all. Crash worthy? No, it crumples like a thin sheet of tin foil in a stiff wind. What? It'll go for about 80 miles on a test track. Why do you ask?"
"John,
We are big on the soylent green already! You never noticed?"
Yeah. I think this is Hit and Run thread number 500 to work in a Soylent Green reference. Once Tall Dave said "soylent biodiesel" you knew the Soylent Green reference could not be far behind.
"Why no, it's not designed to haul more than 2 people nor any cargo at all."
Neither is a motorcycle and those are still cool.
"Crash worthy? No, it crumples like a thin sheet of tin foil in a stiff wind."
Pansy.
Still waiting for the car that runs on a huge, tightly wound spring. This will require an infrastructure of winding stations, manned by giants. I dare to dream big.
So, nuclear winter is the only workable solution to man-made global warming.
Now that's irony.
"Still waiting for the car that runs on a huge, tightly wound spring. This will require an infrastructure of winding stations, manned by giants. I dare to dream big."
You don't need a giant, you just need a big key that gives you some leverage.
Leave JW alone. He's just still smarting over how hard I pwn'd him on the Dark Knight thread.
You could just keep a diesel generator in the rear to recharge them. The only problem would be that it might take a while to recharge them.
Spare battery modules in docking stations in the MRAP or other vehicle.
Those new cool Marine Diesel/Electric vehicles have plenty of charging capacity too, but I think they might be running up the cost of TallDave's fuel.
Why no, it's not designed to haul more than 2 people nor any cargo at all.
It seats six; it could easily seat four with cargo room. RTFA.
Crash worthy? No, it crumples like a thin sheet of tin foil in a stiff wind.
I believe I stipulated that it might not be safe. It is glued together, after all.
It's interesting that the two fields that - at this moment - are 100% science fiction are the ones that everyone is the most scared about.
I, for one, am sick and tired of the continuing attention the AI singularity crowd gets. First of all, their methods for "predicting" the imminent date of the singularity are laughable - and have been wrong for decades.
Secondly, the level of unfounded speculation and ranting on the topic approaches what you would expect from some doomsday cult holed up in a desert bunker.
Yet they continue to be taken seriously. Amazing.
Is it theoretically possible to create AI? Sure, why not. But that would be one of the major engineering accomplishments of human history - it's not something that's going to pop into existence one day on someone's hard drive like a technological immaculate conception. It won't appear like a thief in the night any more than my copy of Office is going to recode itself tomorrow night and grab the launch codes from NORAD.
The other problem that the singularity folks have is mistaking intelligence for power. No one would argue that Stephen Hawking, for example, is one of the most intelligent people that's ever lived. But how much power does he wield?
Let's wait until we make real progress - or even a tiny step in the direction of real progress - and understand how this shakes out in the *real world* before we start getting all in a lather. Any discussion at this point has about as much validity as a heated argument over the relative merits of Romulan vs. Klingon cloaking technology.
Guy,
A HMMWV has a huge diesel engine. You could easily charge them off of an idling HMMWV.
It seats six; it could easily seat four with cargo room. RTFA.
How many in the ashtray?
I believe I stipulated that it might not be safe. It is glued together, after all.
Plenty of cars are glued together today! They need to get with the 3M corporate overlords for the proper adhesive.
Hey now, it wasn't *that* bad. I forget one little fucking detail....grumblegrumble
John--Motorcycles are always cool and always will be. Smart Cars/Air cars will never be cool, even if giant coolness-sucking zombie aliens invade the Earth and destroy all that is cool.
And I will be right behind our own native badass, Nick Gillespie, who will be leading the charge against these soulless demons, riding our bikes straight into the mouth of certain doom. Yeah, you heard me. Right fucking at them. I'll see you in Hell, pal.
A HMMWV has a huge diesel engine. You could easily charge them off of a idling HMMWV.
Maybe, but they will be getting hauled around in something bigger. Might need to add charging capacity to the HMMWV too.
"Any discussion at this point has about as much validity as a heated argument over the relative merits of Romulan vs. Kligon cloaking technology."
Everyone knows that debate was settled years ago in the Romulans' favor.
What about the dangers of turning SETI active and sending out signals to let whatever is out there, if anything, know we are here? I know that sounds far out, but we seriously have no idea what, if any, life is out there. The aging hippies at SETI are convinced that any intelligence in the universe must be some kind of benevolent Age of Aquarius type and could never mean us harm. Where the hell is H.P. Lovecraft when you need him?
Air cars will never be cool
Wherever the air comes out will probably be a cool spot. Can probably chill a Foster's or two there.
SugarFree--I'm not hatin' on you, but these "alternative" cars always look reeeealy, really nice, until the real world smacks them in the face.
Give me a car that runs on air and does *everything* my conventional car does as well, and I'll be a convert. 'Til then, it's just the engineers talking smack about how great their shit is in the lab.
JW,
I will give you that. An air car is about as cool as a segway. But still it is kind of a neat idea.
What about the dangers of turning SETI active and sending out signals to let whatever is out there, if anything, know we are here?
We have been doing that since before Heinrich Hertz. Didn't you see Contact?
BTW, I wonder if Hertz rents air cars?
Anon,
Leaving aside the actual science, the singularity hypothesis is not about an AI arriving ex nihilo onto someone's hard drive, but rather about an existing man-made AI "realizing" that it can make itself "smarter" and devoting its now-higher intelligence to figuring out ways to make itself even smarter, until it begins to consume the entire computational output of all the resources it can link itself to (including human minds and the forced conversion of matter, maybe all matter, into a computational medium).
While this is a science fiction scenario, it's not the silliest idea anyone's ever come up with. It's just silly compared to real threats like large impactors and Miley Cyrus.
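If it helps to see that feedback loop in miniature, here is a toy iteration (purely illustrative; the growth rule and numbers are my assumptions, not anything claimed in the thread or at the conference):

```python
# Toy model of recursive self-improvement: the smarter the system already is,
# the more it can improve itself each cycle. Numbers are arbitrary assumptions.
capability = 1.0          # define "human-level" as 1.0
gain_per_cycle = 0.1      # assumed efficiency of each round of self-modification

for cycle in range(1, 11):
    capability += gain_per_cycle * capability ** 2   # superlinear feedback
    print(f"cycle {cycle:2d}: capability = {capability:.2f}")
# With improvement proportional to capability**2, growth is faster than exponential
# (it diverges in finite time in the continuous limit); that runaway feedback is
# the intuition behind the "intelligence explosion" scenario described above.
```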
Oh, and if anybody does not believe that the extraterrestrials can detect the experiments of Heinrich Hertz, then you are just not taking this discussion seriously enough to matter.
I say one nice thing about an air car...
You guys only want the cynical, mean SugarFree. You won't let me have LAYERS!
"We have been doing that since before Heinrich Hertz. Didn't you see Contact?"
Actually no. That is a popular myth. Astronomers have figured out in the last few years that ordinary radio and TV waves fade to the point of blending in with background radiation within a couple of light years. So there is no alien out there watching I Love Lucy reruns 50 light years away. To go really far, like 100s or even 1000s of light years, you need a certain kind of concentrated and really powerful beam. That is what the SETI folks are trying to send out.
"Even if something menacing and terrible lurks out there among the stars, Zaitsev and others argue that regulating our transmissions could be pointless because, technically, we've already blown our cover. A sphere of omnidirectional broadband signals has been spreading out from Earth at the speed of light since the advent of radio over a century ago. So isn't it too late? That depends on the sensitivity of alien radio detectors, if they exist at all. Our television signals are diffuse and not targeted at any star system. It would take a truly huge antenna-larger than anything we've built or plan to build--to notice them."
http://www.seedmagazine.com/news/2007/12/who_speaks_for_earth.php?page=all&p=y
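For a rough sense of why detection is so hard, here is a minimal inverse-square sketch (the transmitter power is an assumed round number, and this ignores the receiver noise floor that actually decides detectability):

```python
import math

# How diluted an omnidirectional broadcast becomes at interstellar distances.
# ERP_WATTS is an illustrative assumption, not a figure from the linked article.
ERP_WATTS = 1.0e6                  # assumed effective radiated power of a strong TV transmitter
METERS_PER_LIGHT_YEAR = 9.4607e15

for ly in (1, 10, 100):
    d = ly * METERS_PER_LIGHT_YEAR
    flux = ERP_WATTS / (4 * math.pi * d ** 2)   # W/m^2 spread over a sphere of radius d
    print(f"{ly:>3} light years: {flux:.2e} W/m^2")
# Flux falls with the square of distance, which is why picking an unfocused signal
# out of the background would take an enormous antenna or a deliberately focused beam.
```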
SF,
I suggest you view Colossus: The Forbin Project and 2001: A Space Odyssey before getting all silly on us again. Remember, the real future prediction movies begin with one word, then a colon, in that order.
John,
Actually no. That is a popular myth. Astronomers have figured out in the last few years that ordinary radio and TV waves fade to the point of blending in with background radiation within a couple of light years.
Please follow along. We are talking super-brainy, engineering G_d-like extraterrestrials, not the ones just like us who destroy everything and have wars!
Bah! Zaitsev is just a hack for Big Invasion.
It's just silly compared to real threats like large impactors and Miley Cyrus.
Dear god, no! Not the Large Cyrus Collider! Everything will reset to hillbilly time!
Every 8 seconds, a child dies somewhere because of bad water and/or lack of sanitation.
Maybe the catastrophists' time would be better spent trying to alleviate some of this.
"Please follow along. We are talking super-brainey, engineering G_d-like exteraterrestrials, not the ones just like us who destroy everything and have wars!"
Maybe we have gotten lucky and they have missed our signals. Everyone else, if there is anyone, has. Why tempt fate and send more?
Everything will reset to hillbilly time!
But, but... I like my shoes and hygiene! NNNOOOOOOOOOOOO!
So, nuclear winter is the only workable solution to man-made global warming.
Would this be more or less ironic than man-made global warming as a potential solution to unintentional nuclear winter? I'm going to tentatively say less ironic.
In my mind, the whole "Nukes can solve all of our problems!" trope was taken to its limit when Dyson came up with the idea of using them to cheaply get into space (close 2nd, the Russians using them to reroute rivers).
"Pasteur's theory of germs is a ridiculous fiction" by Pierre Pachet in 1872,
this isn't a "predictive failure," (what is it predicting?) this guy just turned out to be wrong
and "Space travel is utter bilge," by Astronomer Royal Richard Woolley in 1956.
I was curious about the context of this so I googled it and:
Gruffed Woolley, in response to reporters' questions about the prospects for interplanetary travel: "It's utter bilge. I don't think anybody will ever put up enough money to do such a thing . . . What good would it do us? If we spent the same amount of money on preparing first-class astronomical equipment we would learn much more about the universe . . . It is all rather rot."
http://www.time.com/time/magazine/article/0,9171,861825,00.html
first of all, he's not saying space travel is impossible, like people say about "molecular manufacturing"-style nanotechnology and "runaway singularity"-style ai (they're probably right about both). second, he was basically right, a lot of space travel is a huge waste of money science-wise
the fact that reason shrugs off global warming but sends people to conferences to worry about grey goo or killer robots...what can you even say?
Well, in Reason's defense - as a proud subscriber - that's not *all* they spoke about at the conference.
I'm one of the guys who was scoffing at the singularity crowd - but I think that the conference program, taken as a whole, is fascinating, and well worth a visit from Reason:
http://www.global-catastrophic-risks.com/programme.html
the fact that reason shrugs off global warming but sends people to conferences to worry about grey goo or killer robots...what can you even say?
Ron has gone to global warming conferences as well. Plus this recent conference covered both global warming and grey goo.
Oh Lawd, please rapture us good Christians up immediately - leaving the secular liberals to suffer for eternity according to the rules laid down by the holy authors of 'Left Behind'.
yeah it's kind of a cheap shot--my real point is that these people's "they scoffed at galileo!" rhetoric is bullshit. i did read this, so i can't really claim this stuff isn't interesting or fun to think about
But, but... I like my shoes and hygiene! NNNOOOOOOOOOOOO!
I, for one, welcome our new toofless overlords.
One cautionary case: Two groups invaded and seized the control room of the Pelindaba nuclear facility in South Africa in November, 2007.
Except the control room ain't where the nuclear material is. Us people who habituate nuclear control rooms tend to like to have the material as far away from us as possible. And we put a lot of other stuff in between that makes the nuclear material hard to get to.
Now, if you just want to blow something up on site, I suppose this method will work. But I don't see it as a very practical case study if one is trying to learn how to get nuclear material for later use.
I thought the world was going to be eaten by a black hole created by a particle accelerator experiment?!?!?!?
Yeah, Ed ... I would have thought that was in the Top 3, at least.
And "Anon | July 22, 2008, 4:14pm", the purported issue with the out-of-control AI (is there any other kind?) is that IMMEDIATELY upon the creation of such a beast, humanity would discover if it was benevolent (i.e. we all live) or if it was evil (i.e. we all die). Therefore, they suggested grappling with the methods for attempting to guarantee benevolence PRIOR to any attempts that might come close to working. Ya gotta think about these things BEFORE they are possible, because if you're wrong ... you die too quickly to fix your "error".
The Large Cyrus Collider could be a boon on this front by retarding everyone in a nanosecond.
Cirincione argued that even a nuclear war limited just to bitter enemies India and Pakistan could produce enough dust and smoke to lower global temperatures by one half to two degrees Celsius, plunging the world back to the Little Ice Age.
I saw this touched on here in the comments, but not with enough incredulity. We are only .5 to 2 degrees globally from another Little Ice Age? Why are we concerned about a little global warming again?
What's the argument for the impossibility of nanotechnology?
Has anyone thought of running some tests on the cracker to see if it is cracker or Jesus? Obviously we already know the outcome, but we can laugh all the harder at the idea of transubstantiation if we prove that the cracker is still 100% cracker.
Actually, even better, you could get a non-transubstantiated cracker and test the difference between them.
Maybe we have gotten lucky and they have missed our signals. Everyone else, if there is anyone has. Why tempt fate and send more?
Now you are getting it. Instead of shutting down all of our organic fuels, we need to shut down all communications to save the world.
John-David | July 22, 2008, 9:36pm,
I saw this touched on here in the comments, but not with enough incredulity. We are only .5 to 2 degrees globally from another Little Ice Age? Why are we concerned about a little global warming again?
Excuse me for jumping in, but the 'great' Dr. Carl Sagan (he had TV shows and stuff) predicted that we would have a mini-Nuclear Winter if Saddam set the Kuwaiti oil wells alight. Guess what? After a volcano erupted near the same time, which should be ignored, we DID have a global temp drop!
We are just on the edge of another global disaster, just ask Al Gore, John Kerry or Senator Obama. We can not afford, as a people, to ignore an opportunity to save the planet. So, vote more taxes and the government will fix it. Promise. Union wages guaranteed.
Why am I a humanist? It isn't an instinctive love of mankind. Indeed my disgust with the folly and degradation of our age precludes anything but a cold blankness in my feelings toward my species. No, it is evolution. Consider: our species is about 100,000 years old. Only the last 6% or so of that is civilized, and only the last 6% or so of that includes modern science. At least another few centuries seems only fair. But if that means another 3, 4 or even 6 billion people, then forget it. Ten billion humans cannot live humanly on this planet. There just isn't enough arable land, potable water or breathable air to make that happen.
Ed: Look for a separate column on the dangers of man-made black holes and strangelets.
Why hate on the strangelets? Just because they are different?
Read someplace recently that "black hole" is a racist term now. Can't think of a new name for it without stepping on the weight challenged or the dense.
Read someplace recently that "black hole" is a racist term now.
I'm colorblind to infinitely dense points; I don't care if they are black, white, purple, or polka dot.
Read someplace recently that "black hole" is a racist term now.
Wouldn't that make it a singularity of no color?
Check out this site:
http://www.skynetrobotics.com
The answer is quite simple, post singularity "humans" will come back after leaving earth for 40 years, having forgotten and mythologized their origins, and will exterminate every human they can find.
anon,
"First of all, their methods for "predicting" the imminent date of the singularity are laughable - and have been wrong for decades."
Really, I thought the earliest anyone predicted the singularity was 2010 and that was if a large well funded group essentially put all of their resources into creating an AI. Even Kurzweil puts his money on 2029.
"The other problem that the singularity folks have is mistaking intelligence for power."
They talk about self improving intelligence. It's a bit different.
"Let's wait until we make real progress - or even a tiny step in the direction of real progress - and understand how this shakes out in the *real world* before we start getting all in a lather."
What would that progress look like? If something like the singularity were to happen it would be the biggest thing to ever happen to the human race. Even if there is only a small chance how could you not get excited speculating about it?
"Any discussion at this point has about as much validity as a heated argument over the relative merits of Romulan vs. Kligon cloaking technology."
Couldn't you say the same about the naysayers?
Bagehot: "Why am I a humanist? It isn't an instinctive love of mankind."(condensed)
Bagehot you seem to be the most serious poster here. Are you saying that you are willing to set the example and lighten the global environmental human load by 1 (one) person? If so, be my guest.
I'm not fully convinced that the development of general AI is unavoidable. Intelligence, and thus the development of AI, is closely linked to available processing power (as explained well by Hans Moravec in some of his articles). Sufficient processing power is a prerequisite to build it, and the more processing power is available, the easier building it will probably be.
Somehow regulating the available processing power could be a possible strategy to prevent or at least postpone this scenario; that is, trying to "slow down" Moore's law. This could act as a barrier to the development of AI by making it too difficult to achieve for someone trying to build it.
To do this one could do the following:
1. Don't give patents on things that are intended or have the potential to increase calculating power. This would reduce the financial impetus for applied research in this field. (The patent system is also a well-functioning international system today.)
2. Find a way to regulate the microprocessor industry, regulate max calculation power, and in some way regulate the production/sales of processors.
3. Find a way to regulate the building of supercomputers, perhaps by some UN organisation similar to the IAEA (I think it's still a bit hard to build the fastest supercomputers in secrecy today).
4. Find a way to partly or totally compensate for losses on investments already made in research or plants for increased calculating power that, due to regulations, will not be utilized.
5. Try to enlighten the public and politicians about the risks/possible dangers and the challenges and choices/dilemmas the emerging technology will bring. (I think these are the issues top politicians in the world should be discussing today.)
6. Bring the awareness of sci/tech dangers into consideration when giving prizes like the Nobel Prizes, etc.
7. Try to regulate public funding of research, so it can be led away from areas somehow defined as risky.
8. Try to work for supranational/international regulations on this kind of research.
9. Give support to researchers who will change their careers away from fields somehow defined as (potentially) dangerous, e.g., nanotechnology.
In the longer term, a strategy should be developed to reduce or "kill off" research and the accumulation of knowledge in areas that could provide alternative routes to faster computing.
Doing this would be difficult, and the suggestions are not optimal solutions, but in my opinion this is probably the only alternative.
Although Kurzweil, in his book The Singularity Is Near, makes a good case that the singularity will happen soon, I think his optimism for humanity in this regard is unfounded.
As long as humanity is at risk of being wiped out, then, at least from a risk-calculation perspective, it is worth putting quite some effort and resources into trying to avoid this scenario.
Money = a symbol of other people's labor.
Nanotech + AI = no need for other people's labor, and hence the obsolescence of money.
With no need for money to pay for other people's labor (physical or intellectual), which is likewise useless, the majority of the population has outlived its usefulness to those in control of nanotech and AI, i.e., corporations and governments. So, I ask, then what?
What do the powers that be, now near omnipotent with their technological genies, do with those of us who have no genies and are of no use to them? Why would they share said genies? Money no longer exists, so how would one pay for these services? Those who control the technology can have whatever they want, whenever they want, and have no need to pay you or me to make, get, or create it for them. If history is an indicator of things to come, the future shall be a utopia for the few who control the technologies, and the rest of us will be extinct or at best ignored!