Faster Than A Speeding Nanobot
Below, two cool graphs from Ray "The Singularity is Near" Kurzweil's presentation at Harvard Business School's Cyberposium this weekend. Kurzweil whipped up these charts to show how awesomely fast cultural, biological, and technological progress can accumulate.
The upshot is that ideas which seem far-fetched--like the possibility of extending human life indefinitely, cybernetic implants, and nanobots in our bloodstreams, all of which Kurzweil has predicted--could be upon us sooner than you think, because change is accelerating at such a mind-blowing pace that our linearly-predisposed brains have trouble comprehending it.
It's important to note that the chart is doubly logarithmic--each gridline along the x or y axis represents a power of 10. If the graphs were on a linear scale, they would look like a nearly flat line, followed by an exponential explosion of progress.
Lest you accuse the man of cherry-picking biological and cultural landmarks to fit his line, Kurzweil calls in some outside experts:
Download the whole presentation here.
Read more about good times with Ray Kurzweil here and here, and--most recently--here.
Check out reason's interview with Singularity speculator Vernor Vinge.
This has to be an old presentation to get Sagan's input.
He references a conference in 2002. It may be that Sagan had a similar list in some of his writing.
Actually, if it's linear on a log-log scale, the relationship satisfies a power law, not an exponential one. But I'm being pedantic--the rate is still pretty impressive.
So they speak of reverse-engineering the brain by 2029. By 2010 computers will disappear? Insane stuff.
But 20 years ago, if someone had told you that you'd have a phone the size of a credit card that has a built-in answering machine and can be used virtually anywhere... well I hope I can live to 2029 to upload my stuff into a brand new clone.
I also love how both axes represent time. It's one thing to see a similar logarithmic chart of time versus speed, or time versus computing power, but time plotted against time? They may be different breakdowns of time, but both axes are still time. This only magnifies the obvious: events closer to our present are more meaningful to us. Duh.
First, we still don't have a cell phone the thickness of a credit card. At least, not today. Second, such a thing would not have been considered crazy. You younguns underestimate us old farts. We already knew twenty years ago that electronics were getting smaller and smaller. We may not have predicted the exact color schemes these phones would have (chocolate?) but we did predict that electronic devices in general would continue to shrink at the rate they did.
This has to be an old presentation to get Sagan's input.
Really, why is that? Because he took all his data to the grave with him?
Basically what this tells us is that whatever is possible will become true rather soon.
Now we just have to hope that the things we most desire aren't impossible. 🙂
And that the things we fear most are avoidable.
This is food for thought. Obviously we don't have much time to digest it.
Chuck,
Good observation but there's more than just that going on.
Linear relations show up as linear on a log-log graph.
y = a(x^b)
ln y = ln(x^b) + ln a
ln y = b ln x + ln a
if b=1 (linear relationship)
ln y = ln x + ln a
Not that Kurzweil is wrong, just that his graph design is insanely confusing.
He's comparing major developmental events to time transpired by graphing the inverse of the rate of event occurrence vs. time (dx/dy over x).
Now if you have an exponential relationship (time exponential in the number of events):
x = a^y, i.e. y = log_a x
dy/dx = (x ln a)^-1
dx/dy = x ln a
ln (dx/dy) = ln x + ln ln a
so graphing ln (dx/dy) over ln x does generate a linear relationship when x = a^y. But that's a lot of math for zero additional explanatory power.
(Yes I used natural logs despite the graph being in base 10 to clean up the math. Got a problem with it? :P)
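MattXIV's derivation can be checked numerically. A minimal sketch (toy numbers of my own, not Kurzweil's actual data): if the gap to the next event shrinks in proportion to time-before-present, then log(gap) plotted against log(time) is a straight line of slope 1, exactly as derived above.

```python
import math

# Hypothetical event times "before present" that shrink geometrically,
# so each gap to the next event is proportional to the time remaining.
r = 0.5
xs = [1000.0 * r**n for n in range(10)]          # times before present
gaps = [xs[n] - xs[n + 1] for n in range(9)]     # time to next event

# Take log10 of both coordinates, as on a doubly logarithmic chart.
pts = [(math.log10(x), math.log10(g)) for x, g in zip(xs, gaps)]
slopes = [(pts[i + 1][1] - pts[i][1]) / (pts[i + 1][0] - pts[i][0])
          for i in range(len(pts) - 1)]
print(all(abs(s - 1.0) < 1e-9 for s in slopes))  # prints True: slope 1 everywhere
```

Any geometric shrink rate r produces the same straight line; only the intercept (ln(1-r) term) changes.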
well I hope I can live to 2029 to upload my stuff into a brand new clone.
But what if your clone doesn't want your stuff loaded into it? What if it prefers to just have its own stuff and keep your stuff out? Does it have a presumptive right to this, just as if it were a stranger or your child? "Just because I'm a clone doesn't mean I don't have a right to live my own life!" (These questions are spurred by the "Human rights issues of clones? What human rights issues of clones?" thread lower down.)
MattXIV:
Knock it off. I'm trying to shoot my beer cans over here.
Obviously, grow a clone without a brain or just upload to another format.
Of course, even if all the Singularity stuff pans out, I have a feeling that afterwards the super-intelligent Lifeforms Of The Future will end up looking at a chart much like this, just with a few more data points in the lower-right corner.
I am proud to own a signed original hardcover of "The Age of Spiritual Machines: When Computers Exceed Human Intelligence".
If you've never read it, DO IT.
Thanks to Kurzweil I'm currently pursuing a PhD in A.I.
But what if your clone doesn't want your stuff loaded into it? What if it prefers to just have its own stuff and keep your stuff out? Does it have a presumptive right to this, just as if it were a stranger or your child? "Just because I'm a clone doesn't mean I don't have a right to live my own life!" (These questions are spurred by the "Human rights issues of clones? What human rights issues of clones?" thread lower down.)
Whoa, I think I just had blog-deja vu for the first time. Or was it the first time? Hmm...
Whatever one makes of Kurzweil's projections, transhumanism does not necessitate subscription to ultra-optimistic timelines. Transhumanism is just a term used to describe a set of morphological goals and interests dependent upon possible future technological developments. Usually this encompasses a maximizing of lifespan and intelligence, and/or optimizing of one's environment in service of certain ends.
Obviously, grow a clone without a brain or just upload to another format.
Like a Macintosh? 🙂
Seriously, you might need a genetic duplicate of your own brain "hardware" to run your own "software." I vaguely remember reading some research in the 1970s that the process of encoding and retrieving memories (and "thinking") had some kind of code that was unique to each individual brain. (As an aside, the writer wondered if this might be connected to reports that "telepathy" was more common between identical twins.)
It was a long time ago, and I have only a vague memory of it, and the research might even be outdated by now, but I remember the core idea was that each individual had his own "brain code," as if every brain ran on its own "programming language" or something.
Thanks to Kurzweil I'm currently pursuing a PhD in A.I.
Sean, I hope you're not disappointed with the results. 'Cause I know I was (and am).
Ray came and spoke at Microsoft last year and presented these same charts. We softies were not impressed. He's just picking data points to fit a graph. It is NOT objective data.
Obviously, those events nearest to you are considered the most important. That's all his graphs "prove".
A very good case can be made that progress has decelerated since the late 1800s. More real progress with real impacts on human life occurred between 1880 and 1930 than between 1930 and 1980...
Paul:
Funny you brought that up. Minsky is #2 in my book right behind Kurzweil - and I completely agree with him.
Some have fallen into the trap of making cute robots, like the South Koreans and Japanese... but they're hardly intelligent.
Crap, this sucks... I could seriously talk for hours on this subject--but I'll leave it at this:
The problem with AI is that, right now, it's based on 'solutions'. A.I. solves a small problem, and then it's no longer considered A.I. anymore. It's just a solution.
Yet we're on the right path. Hell, all $220 billion worth of Google is based on smart, intelligent algorithms (for the most part).
How about a Creationist's version?
Actually, I have always generally agreed with guys like Kurzweil...AI is coming, and when it does, things will change. He may be overly optimistic, but as a 32-year-old, I expect it in my lifetime. We WILL make machines that are "smarter" than us. They will, in turn, make machines smarter than themselves. There will be a convergence of the biological and non-biological...and beyond that, it is anyone's guess as to how human society will evolve.
Sean:
The problem with AI is that, right now, it's based on 'solutions'. A.I. solves a small problem, and then it's no longer considered A.I. anymore. It's just a solution.
These discussions can become heated, and you're correct, in general. But my description of these solutions is that they're compartmentalized in nature. They're really fast desktop calculators with more memory. What Minsky has been trying to say--and it's been falling on deaf ears, IMHO--is that the original goals of AI have been sidelined; what we were referring to in 1960 when we said "AI" has been largely abandoned.
Our calculators are getting faster, and their storage is getting bigger, but the binary processing of the last fifty years hasn't changed at all. We're simply able to process more "yes/no" questions and answers than ever before. This is evolutionary, not revolutionary.
I'm very familiar with the debates between artificial intelligence and artificial life. One person wisely posited that what we call AI these days is an accident of history.
Like the next person, I'm always excited by the increase in computing power. Moving bits faster means I can watch video, then bigger video, then more high resolution video, then high def video, then over the internet, etc.
Storage and speed are a great thing. But Ray Kurzweil needs to take his stupid chatbot off his page, because it's embarrassing. 30 years of chatbot research and they still open up by asking me about my mother.
But when will I get my Japanese sex robot? All of that other crap really doesn't say "progress" to me.
Chris S.
You can have your Japanese sexbot now. In fact, for 1/100th the cost, you can have a latex one that blows up, no batteries required.
And, the conversation you'll have with it will be ten times more intelligent than the one you'll have with "Ramona", Ray Kurzweil's dumb-assed chatbot.
But when will I get my Japanese sex robot? All of that other crap really doesn't say "progress" to me.
Chris S., here she is. No need to thank me.
Hey, my name is Chris S too...
But anyway, where are the:
DOODZ, HAVE YOU SEEN TEH ALEX JONEZ ENDGAMEZ!!!1!!one!!11!eleven!!!1!!
IT IZ AL ABOUT DIS STUF!!!!
ENDGAME
I'm not sure if I believe in this singularity but there's a point of it that I think about when reading these comments.
That is, beyond the time when machines and cybernetics eclipse humanity, it's really impossible to know what happens by definition. Our minds don't fully comprehend it which is a large portion of the point.
Arguments about large portions of what will happen seem to become mute points because of this.
"Mute point" is either a great pun or a great eggcorn.
Where's my flying cars and jetpacks, huh Ray? Ain't the future if there's no flying cars or jetpacks!
I'm so glad I'll be long dead while you nerds are playing Tetris through all eternity.
Eh, I'm still a bit skeptical. Yes, it is true that scientific discovery has become more advanced ever since the Scientific Revolution (which ironically, doesn't seem to get a mention on Ray's little chart). Prior to the mid-1500s, most of our science was based on the work of the ancient Greeks and Romans (Aristotle, Galen, etc.). After the Scientific Revolution, however, discovery became more common, and has remained so ever since.
Thus looking at ALL of our history, progress DOES seem to move exponentially. After the Scientific Revolution, however, that exponential growth seems a lot less dramatic. Just look closely at the graph, and you'll see that the time period from TV/transistor radio to computer is roughly similar in length to the period from computer to personal computer.
According to their own data, then, the change is not as dramatic as they claim it to be. Downright absurd predictions (no more computers in three years? Golly gee!). Of course, this could just be my "linearly-predisposed" brain speaking.
RK's knowledge of neuroscience (or lack thereof) is the primary weakness in his arguments.
Even if we had computers with equivalent computation capacity to a human brain, the concept that we could "upload" our intelligence into it requires that we have some concept of exactly what to upload. Our current knowledge of the basic workings of the brain make it seem unlikely that we could upload into a computer in any meaningful sense.
The mind is part of the body is part of the mind. Taking the body out of the picture would distort the mind away from the "I" in a pretty fundamental way. We might be able to create an artificial intelligence that shares a substantial portion of information with you, but that intelligence would be its own "I," and certainly not a way for you to exist beyond your body.
Fun to think about, however.
AI research is like trying to reach the moon by climbing the tallest tree. It will get you part way to your goal.
Curses, I need to make a couple of corrections. My second sentence should be:
"Yes, it is true that scientific discovery has become more advanced every year since the Scientific Revolution (which ironically, doesn't seem to get a mention on Ray's little chart)."
My second to last sentence should have "..don't help their case either." after the parentheses.
The mind is part of the body is part of the mind. Taking the body out of the picture would distort the mind away from the "I" in a pretty fundamental way. We might be able to create an artificial intelligence that shares a substantial portion of information with you, but that intelligence would be its own "I," and certainly not a way for you to exist beyond your body.
There's no consensus on this matter. To the extent that we can reduce these arguments from philosophy of mind to neuroscience, questions regarding the continuity and integrity of "I"ness undergoing functional duplication may be answerable.
Some good books to read:
Singularity is Near by Kurzweil
Ending Aging by Aubrey de Grey
On Intelligence by Jeff Hawkins
Also check out:
Kurzweil video at MIT
http://mitworld.mit.edu/video/327/
Aubrey de Grey video at TED
http://www.ted.com/index.php/talks/view/id/39
See for yourself; make up your own mind.
Not one,
There's no consensus on this matter.
Of course. That is part of my point.
Neuroscience is not even close to the point where it will be able to upload your mind independent of your brain. It would require a fundamental breakthrough. There is no reason to believe that this breakthrough will occur on pace with the advances in computer technology/AI.
A workable machine intelligence does not have to operate on the same principles that our intelligence operates on. Transfer between the two may be fundamentally incompatible depending upon the particulars.
I lost my fear of self-aware machines taking over the world when I realized that no one can write a bug-free program, therefore no one can originally program a machine to program itself without errors.
Heck, we can't even get a web browser to run without errors, and we're worried about them programming themselves???
Anonymoose,
Of course, if the programs are evolved rather than programmed by a human, those errors may be the source of the innovation that leads to an intelligent machine (think of the role of random genetic errors in creating us). Many AI folks are working on hybrid Darwinian/Gibsonian protocols to evolve intelligent autonomous agents. Sure, they are only at the stage of, let's say grasshoppers, at the moment, but this line of research is the most promising, imho.
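Neu Mejican's point--that copying errors plus selection, not a bug-free programmer, can drive improvement--is easy to demonstrate with a toy genetic algorithm. A minimal sketch on the classic OneMax problem (the population size, mutation rate, and generation count here are arbitrary choices of mine, not anyone's research protocol):

```python
import random

random.seed(0)  # deterministic run for illustration

def fitness(genome):
    # OneMax: fitness is simply the number of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Each bit may flip -- the "copying error" that occasionally helps.
    return [b ^ 1 if random.random() < rate else b for b in genome]

pop = [[0] * 32 for _ in range(20)]      # start from all-zero genomes
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                 # selection: keep the fitter half
    pop = survivors + [mutate(g) for g in survivors]

best = max(pop, key=fitness)
print(fitness(best))                     # climbs from 0 toward the optimum of 32
```

Every genome starts at fitness zero; nothing but random bit-flip "errors" filtered by selection pushes the population toward the all-ones optimum.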
Neu Mejican,
Well sure, the genetic analog looks promising. We have at least one empirical example that it works, depending on what you believe. But what will happen to the program mutations that are unsuccessful? How will fitness of automatons be evaluated? More importantly, who (or what) determines the fitness of cybernetic augmentations? Bigger better faster humans are one thing. More acquisitive, more arrogant, more elitist humans appear more likely to me, but seem to be rarely considered by the futurists.
Think "Terminator", not "Star Trek". I have long believed that anyone working on development of autonomous systems should be forced to watch "Terminator" or a similar film, just to remind themselves of what they DON'T want to do. FWIW, I feel the same way about forcing those working on nuclear weapons to watch "Dr. Strangelove" or "Failsafe".
Neu Mejican and Anonymoose,
Predicated on the assumption that Strong AI is possible, the difficulty may lie in developing an agent with goals and interests hospitable to our existence and with ends that further our own.
Groups like the Singularity Institute for Artificial Intelligence struggle over these potential challenges.
Anonymous,
The movie I would recommend along those lines is "Gattaca." Sure, they are talking genetic enhancements, but the principal danger is the same.
And of course, there is the whole Borg thang as well.
Movies as life lessons: All parents should watch "Kids" before their children turn 10. All kids should be forced to watch "Requiem for A Dream" before age 12.
How will fitness of automatons be evaluated?
Reproductive success?
not one,
the difficulty may lie in developing an agent with goals and interests hospitable to our existence and with ends that further our own.
And people think inter-racial politics is hard. Imagine the debates regarding inherent rights when you can identify the creator (man or machine) for a particular individual.
My stoner friends in college used to watch "Requiem for A Dream" a lot. While high. I don't think they were doing it for irony's sake either - they watched it too many times to not actually like it. I think it was due to the somewhat "trippy" visual style and the soundtrack. In fact, the ones that liked it the most were the same ones that got into meth later that year...
On the other hand, I don't think any of them has ever sucked a dick for coke, so it may have worked on some level.
I want one of the Singulartarians to download their consciousness into the giant Jason Taylor robot and either destroy London or get the Miami Dolphins to win a game.
http://www.youtube.com/watch?v=5ijr1_6Q6C0
I need to see some hard evidence here people. The destruction of a major world city should do it.
Extrapolating from these trends, we see that all important events will take place in the next 5 minutes. Hooray! Oh wait, then the world will end.
Seriously though, I find it hard to take seriously any graph which places "development of eukaryotic cells" and "personal computers" on a continuum.
Two points. First, how do you know the "time to next event" for the LAST event? You need this to plot the point's vertical coordinate. What comes after "personal computer"?
Second, this chart would look much the same (except maybe for the last point, which as noted above is questionable) if plotted not as "time from present" but as "time from 2030" or "time from 2050" or "time from 1980" for that matter. Broadly speaking it says that things may be speeding up but it doesn't tell us when we are going to reach the asymptote (aka Singularity).
There is no singularity but the singularity and Kurzweil is its prophet. Behold the technorapture cometh and Ron Bailey is fit to loosen its shoe strap...