Rainbows and Artificial Intelligence
reason contributor John Tierney chats with Vernor Vinge, coiner of the term The Singularity, in today's New York Times. They discuss Vinge's most recent book, Rainbows End, in which an old fogey's Alzheimer's is reversed in 2025 and a man from the age of email has to learn to cope with Internet-enabled contact lenses and GPS clothing. Vinge also offers some tips on Tierney's blog for staying on board as our machines get smarter (and smarter than us).
Allow human/computer teams at chess tournaments. This has also been suggested by Garry Kasparov. It still seems to me that allowing such entrants in human tournaments need not be obtrusive, and would ease the general acceptance of the symbiosis idea. It would also be interesting to see if top players came to recognize that such teams displayed a new style of play, different from the styles of pure human and pure machine competitors….
Develop human/computer symbiosis in art. Of course, parts of this are being deeply exploited. However, we're still missing a very important possibility, and that is collaboration closer to the point of creativity itself. Karl Sims's "picture breeding" was a super example of this: The program would generate a screen full of abstract art thumbnails and the user (artist) would select particular thumbnails to be the "seed stock" for the next iteration of the process. In 15 minutes, an ordinary person (such as myself) could generate abstract graphics that were as attractive (well, to me at least) as the best commercial art.
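The picture-breeding loop Vinge describes is a form of interactive evolutionary computation, and its skeleton is small. Here's a minimal sketch under invented assumptions: each "genome" is a list of parameters some renderer would turn into an abstract image, and the user's thumbnail picks stand in for a fitness function (`mutate` and `next_generation` are illustrative names, not from any real picture-breeding program):

```python
import random

def mutate(genome, rate=0.2):
    """Copy a genome, randomly perturbing some of its parameters."""
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

def next_generation(seed_genomes, population=12):
    """Breed a new screenful of thumbnails from the user's chosen seeds."""
    return [mutate(random.choice(seed_genomes)) for _ in range(population)]

# One iteration: start from a random screenful, pretend the user picked
# thumbnails 0 and 3, and breed the next screenful from those two seeds.
population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(12)]
chosen = [population[0], population[3]]
population = next_generation(chosen)
```

The key design point is that selection is the only step requiring taste, so the human supplies exactly the part the machine lacks.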
reason interviewed Vinge last year. Tierney's reason contributions here. Get some Internet-enabled contacts here.
It's a great book, read it a month or so ago. One of the better attempts at envisioning a near-future scenario.
I loved Fire Upon the Deep and Deepness in the Sky, Rainbow's End not so much.
For the truth, average me and TallDave. 🙂
Add an A to the first 2 titles in my first post.
I eagerly anticipate wetwiring. I already follow Einstein's rule of "I never memorize anything I can look up in a book" but with the added resource of the internet; having wetware to access information directly would be a total win.
I saw Vinge speak at the Sci Fi museum in Seattle, and he's a pretty impressive guy. Wicked smart, and I'm sure I only got a fraction of what he was saying.
I tried reading Fire Upon the Deep and didn't care for it, or at least Vinge's writing style. I never went back to anything else he wrote.
I liked Deepness in the Sky but am not familiar with much of his other work.
I eagerly anticipate wetwiring.
I'll wait till it comes out of beta. You can be my guinea pig, though.
lmnop,
Having read Deepness without having read Fire is interesting. If you werent aware, Deepness is a prequel to Fire, although the two are entirely separate. There are some things in Deepness that are better understood if you have read Fire. Like the source of the cavorite. And the results of Pham Nuwen's final mission.
A line from planes, trains, and automobiles is going thru my head right now.
Progress on the singularity: See the 'dial a human-- avoid annoying computers' thread below.
I'll wait till it comes out of beta. You can be my guinea pig, though.
Sure, unless it's made by Apple. First adopters always get screwed by them.
Im never using Microsoft brand implants. I dont reboot my brain.
OMG dude, I havent been to a Chess tourney in years! Now THAT sounds like FUN!
JW
http://www.useurl.us/17n
Well, our spammer was kind of on topic, if you read the TierneyLab How to Get Smarter piece.
I thought Epi would be all over the human/computer chess teams idea.
robc: There is no apostrophe in Rainbows End.
The basic problem with all of these Singularitan arguments is that they depend on the notion that we'll get something functionally close to artificial intelligence within the next, say, 20 years. But we aren't fundamentally any closer to AI than we were 20 years ago. It's by no means clear that we're on track to get it.
I thought Epi would be all over the human/computer chess teams idea.
It's neat, but much more interesting would be "fast" games played with wetware interfaces. Imagine the frenetic blowout of a 200-player deathmatch played without the need for using your keyboard, mouse, or controller.
It's really in speed that it would be interesting. With enough time to make moves, a human (Kasparov) can beat a machine (Deep Blue) in chess.
Too soon. Way too soon.
Probably, but these "computers" will probably be a far different thing than the binary processors we refer to as computers, today.
Already happening, depending on how you define it.
Define "edge".
Another interesting perspective.
We are far closer to AI than we were 20 years ago. You can't create a human level AI with insect level processing.
We are far closer to AI than we were 20 years ago.
One of the problems is that the goalposts for AI are constantly being shifted. We went from calling systems "intelligent" to calling them "smart".
AI is not going to happen on the linear processors we have now. It will take some kind of revolution in parallel processing or quantum processing to get to AI. However, I have a feeling that it will happen very quickly after such a revolution.
revolution in parallel processing or quantum processing to get to AI.
Parallel processing? I have my doubts. Many dumb processors running side-by-side do not, in my opinion, a smarter computer make. Quantum processing? Possibly.
Another perspective on faster and better hardware:
http://www.engagingexperience.com/ai50/
Having read Deepness without having read Fire is interesting. If you werent aware, Deepness is a prequel to Fire
I wasn't aware. I shall have to check that out! Thx.
AI is not going to happen on the linear processors we have now. It will take some kind of revolution in parallel processing or quantum processing to get to AI.
Personally I think it will have more to do with advances in asynchronous processing and more robust feedback/feedforward architectures.
I eagerly anticipate wetwiring. I already follow Einstein's rule of "I never memorize anything I can look up in a book" but with the added resource of the internet; having wetware to access information directly would be a total win.
Yeah, but it's blowtastic for those of us who are naturally gifted with a retentive memory. Besides that, do you really want to have to listen to a bunch of idiots spout off whatever incorrect information comes through their brain jack because someone vandalized wikipedia?
But it's an idiot-savant: it plays brilliant chess, but it can't do anything else
Most AI attempts that succeed will be for special purposes. We won't start with Wintermute.
I got into an argument about 20 years ago with someone on the subject of whether or not computers would someday be able to think. The other guy said absolutely not.
I finally asked him "do birds fly?", yup. Then I asked him "do planes fly?", pause then yup. Then I asked him "do planes do what birds do?", which got no response. But at least he shut up.
Personally I think it will have more to do with advances in asynchronous processing
Binary processors working asynchronously? We have that now, sometimes referred to as distributed processing. Or is there another definition of which I'm unaware?
Paul --
I'm talking about what you might call 'catastrophic asynchrony'. In other words, no overall governing clock.
Much like how a brain works. That is, there are coherent signals that percolate through the neuronal net, but each process does not run strictly in parallel and is not governed by an overall clock.
We won't start with Wintermute.
My opinion, which gets flamed by "visionary believers" is that we won't end with Wintermute either. Not with faster, parallel (or otherwise) binary processors. We can make a program or a computer "clever", but that's a far different thing from making it understand that "water is wet", to borrow from the visionary AI researcher Minsky.
LMNOP:
That is, there are coherent signals that percolate through the neuronal net, b
Back in my day(?!), I was a big neural net fan. I've cooled on the concept as of late, though.
'catastrophic asynchrony'.
That's what tends to happen when my wife gets complete control of my schedule.
Vinge's original use of the word "singularity" was that the rate of change is increasing, and it will eventually reach a point where we cannot extrapolate beyond it. That "singularity" will never happen, because human nature stays the same. Human beings can only adapt to change so fast. Children can adapt much faster than adults, but there is still a limit. If we ever did hit a singularity, where we could not be sure what society would be like the next day, we would not suddenly enter some unimaginable quasi-paradise, but see instead the collapse of civilization.
My problem with the term "singularity" is that it's squishy, it leaves too much to interpretation-- it's a concept word. We could be sitting around, minding our own business, living exactly like we did two days ago, a week, a month, even a year ago, and one of these gurus could stand up and say "See? I told you. The singularity is here!"
Begin debate on who the first real "punk" band was.
Laugh all you want, but it's no fun having a T triple 8 on your ass.
So, Deep Blue: it doesn't play brilliant chess. It plays very fast chess. Deep Blue isn't really any better at chess than J. Random Chess Program that you can get at the store, it's just that its very impressive parallel architecture allows it to analyze many orders of magnitude more board positions than your home computer does. That, in turn, gives it a deeper look-ahead than your home computer has.
In chess, as it turns out, quantity has a quality all of its own. But that's not generalizable. I don't just mean that you can't get Wintermute from Deep Blue, you can't even get a computer that plays Go well from Deep Blue.
I see people here repeating a general fallacy that says, "Once we get our hardware fast enough, AI will just happen." Of course, I can't prove a negative, but the strong indication is that that's not true. Our hardware has gotten a lot faster since people first started dreaming up AI -- not, like, ten times faster. Hundreds of millions of times faster. And we don't really have anything like AI. Again, it's not that we don't have Wintermute, it's that we don't have something which can mimic a person with an IQ of 75. We don't even really have anything which can mimic a dog. Arguably, we don't have anything which can mimic an ant. We probably have enough processing power to mimic an ant, but we don't know how to put it together.
That's the fundamental problem that we haven't really made any substantial progress on: nobody knows what to do with all the processing power. There was some hope that you could just wire everything into a neural network or some other learning architecture and AI would "just happen," but not only has that not happened so far (despite a lot of attempts), but we haven't even really seen anything particularly encouraging happen which falls short.
That's why I'm deeply skeptical of claims that AI is just around the corner. It's been just around the corner since, like, the '60s.
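The "deeper look-ahead" that Deep Blue's hardware bought is just brute-force minimax run at enormous scale. A toy sketch makes the mechanism (and its cost) concrete; the "game" here is invented for illustration: the state is a running total, each move adds 1, 2, or 3, the maximizer wants the final total high and the minimizer wants it low.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Search every line of play `depth` moves deep and return the best score."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal]
    return max(scores) if maximizing else min(scores)

result = minimax(
    0, 3, True,
    moves=lambda s: [1, 2, 3],        # three legal moves from any state
    apply_move=lambda s, m: s + m,
    evaluate=lambda s: s,
)
print(result)  # max picks +3, min answers +1, max finishes +3: 7
```

Every extra ply multiplies the work by the branching factor -- 3 here, roughly 30 in chess -- which is exactly why raw speed, not insight, was Deep Blue's contribution.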
JW,
You might give Rainbows End a shot. It's a very different style. It's a bit like something you'd expect from Stross or Scalzi.
I don't just mean that you can't get Wintermute from Deep Blue, you can't even get a computer that plays Go well from Deep Blue.
I don't see why not. It's just a different set of rules.
I see people here repeating a general fallacy that says, "Once we get our hardware fast enough, AI will just happen."
That's the typical sci-fi fallacy. It won't happen accidentally, it will require replicating the programming we inherit from hundreds of millions of years of evolution. That will be a huge, huge task.
That's the fundamental problem that we haven't really made any substantial progress on: nobody knows what to do with all the processing power.
Sure we do. There are all kinds of AI projects working on code to replicate the function of mammalian brains (DARPA is currently funding a program to produce a working imitation of a cat brain). A lot of it is needed just for visual processing, which your brain does a lot of without you noticing. Something that seems simple to us, like differentiating a child from an animal or a wet cloth from a dry one, turns out to actually be very complex.
You can find dozens of things that are pieces of AI which have been developed over the last 30 years and are now commonplace: synthesizing speech, recognizing and understanding speech, optical character recognition, facial recognition, a chess program that can beat a grand master.
That's why I'm deeply skeptical of claims that AI is just around the corner. It's been just around the corner since, like, the '60s.
Yeah, they were way off. But we're undeniably a lot closer now.
The thing about AI is you just can't program one. It's far too complex. The only way I can see it happening is to build essentially an "infant" AI that is capable of learning and advancing itself, and then start to "teach" it things and let it build itself up.
This would be fascinating, but you'd have to essentially write code that can write more code for itself. That's virtually impossible.
lmnop,
robc: There is no apostrophe in Rainbows End.
No comment on the irony involved in that?
Also, while Deepness is a prequel to A Fire Upon the Deep, dont assume they are even remotely similar. They are many many thousands? of years apart in very different parts of the galaxy. Pham Nuwen is (sorta) the only character in common.
Stylistically, however, they are similar, at least to me.
Also, for those who have read Fire - I have a plant with its own skrode. Its a lesser skroderider, it just has wheels, it hasnt figured out how to control them yet. 🙂
Epi,
It depends on what you mean by AI, or want AI to do.
A full replica of a human brain, along with all the hormonal inputs, is a mammoth task requiring hugely parallel processing. We balance tens of thousands of competing compulsions, and what we think of as our consciousness sits in our cerebral cortex thinking of ways to satisfy the hindbrain. Just processing all the visual and aural input is an immense task.
There just isn't much of a market for such a thing; generally we want computers to do specific tasks, so that's what chipmakers are good at making.
but you'd have to essentially write code that can write more code for itself. That's virtually impossible.
Nah, that's not that hard, really. Computers already modify their own code without much help via things like anti-virus programs, which might be analogous to a human being kicking a bad habit.
You might give Rainbows End a shot. It's a very different style. It's a bit like something you'd expect from Stross or Scalzi.
I actually might. I read a bit of the Google book that was linked and liked it much better.
Of course, I have to get through the last 1,000 pages of Peter Hamilton's Reality Dysfunction tome before then...and Scalzi's The Last Colony just came out in paperback...and I have to read his Android's Dream after that...but Scalzi goes quickly.
I think I still have another book lying around somewhere that I forgot I bought too. THEN I'll be able read Rainbows End.
A full replica of a human brain, along with all the hormonal inputs, is a mammoth task requiring hugely parallel processing. We balance tens of thousands of competing compulsions, and what we think of as our consciousness sits in our cerebral cortex thinking of ways to satisfy the hindbrain. Just processing all the visual and aural input is an immense task.
Yeah, but I thought that neurobiologists had fairly conclusively demonstrated that the human brain fudges about 90% of its perceived inputs. That's why I believe the key is in complete asynchrony; the visual system picks up an image in little pieces, but the conscious routine (whatever that may be) doesn't bother to wait to see if the image is compiled before calling the operation. The visual cortex feeds forward the first-pass input (such as it is) but also feeds back to itself to refine the image. Neither process waits for the other under most circumstances.
Computers already modify their own code without much help via things like anti-virus programs
No they don't. Windows doesn't modify its own code. Ever. A team of humans at Redmond do. Some programs, like voice recognition, build up a learned dataset for recognizing speech, but they also do not modify their own code, they merely modify the dataset.
This means that the logic of the programs never changes without external interference, only input data. A learning AI would have to be able to modify and create new logic routines for itself. That's a big deal.
TallDave writes: "I don't see why not [make a Deep Blue-like computer that plays Go well]. It's just a different set of rules."
Because Deep Blue is lookahead based. So, looking ahead has an order of complexity which is exponential on the number of moves that you can make in any given turn. Chess generally has something on the order of 30 legal moves per side per turn, so you're looking at an algorithm that's basically 30^n, where n is the depth that you want to look ahead. Deep Blue was looking ahead roughly 10 moves during the first time it beat Kasparov.
Go starts out with 361 legal moves, and that number slowly decreases. For a typical mid-game move, you probably have something on the order of 300 legal moves. So, to look ahead 10 moves, you need 300^10, which is 10^10 times more complex than an equivalent lookahead in chess (also, you probably need a longer lookahead to perform comparably, but whatever).
Even if Moore's Law continues without end, it'll be more than half a century before we have computers 10^10 times faster than Deep Blue.
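The branching-factor arithmetic can be checked in a few lines (it works out to a factor of 10^10 at 10 plies, using the same rough branching factors of 30 and 300):

```python
from math import log2

# A 10-ply full-width lookahead in a Go-like game vs. a chess-like game.
chess_branch, go_branch, depth = 30, 300, 10

ratio = go_branch ** depth // chess_branch ** depth
print(ratio)  # 10**10: ten billion times more positions to examine

# Years for Moore's law (a doubling roughly every two years) to deliver
# that much raw speedup, with no algorithmic improvement at all:
print(round(2 * log2(ratio)))  # about 66 years
```

This is only a speed argument, of course; it says nothing about whether lookahead is the right approach to Go in the first place.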
All of this points out that Deep Blue is a gimmick. It's a cool machine, but it doesn't play chess the way that humans play chess, and the way it plays chess is not generalizable to a wide class of intellectual problems that we might like to solve.
Oh yeah, it definitely fudges; that's why those optical illusions look so weird to human eyes
http://dogfeathers.com/java/spirals.html
Your visual cortex is trying to predict what will happen next and mixing that in with what you actually see. You can imagine how much processing power that takes.
I agree, asynchrony will probably be a necessary component.
True, Go favors a massively parallel, massively interconnected processing system like ours.
But computers do manage to play Go fairly well:
Go poses a daunting challenge to computer programmers. While the strongest computer chess hardware has defeated top players (for example, the IBM computer Deep Blue beat Garry Kasparov, the then-world champion, in 1997), the best Go programs only manage to reach an intermediate amateur level. On the small 9×9 board, the computer fares better, and some programs have reached a strong amateur level. Human players generally achieve an intermediate amateur level by studying and playing regularly for a few years. Many in the field of artificial intelligence consider Go to require more elements that mimic human thought than chess.
Also: Sure we do. There are all kinds of AI projects working on code to replicate the function of mamallian brains (DARPA is currently funding a program to produce a working imitation of a cat brain).
And if the DARPA program has a useful output, then I'll reconsider my position, but right now, we don't know that it will. The process by which we assign meaning and links to our sensory input and produce a command output remains basically opaque to us. Oh, we think we have an idea of how things work on the neural level, but how that all gets put together? We don't know. So our real AI research (as opposed to side projects like Deep Blue, which, it was clear from the start, wouldn't be particularly illuminating of any AI principles, though it was an interesting engineering challenge) is limited to things like, "Well, we'll try to replicate this thing from nature and hope for the best."
And, who knows, maybe it will bear fruit. But we don't know that it will. It's a shot in the dark. It's entirely possible that that DARPA project will yield absolutely nothing. That's unlike problems like, say, building a car that drives itself, where it might fall short of our ultimate goals, but we know basically how to do better than we currently are doing.
Until we have some idea of how that whole thinking/consciousness thing works, we're limited to shots in the dark or things like, "create a big neural net, give it a lot of inputs, and hope for the best." They may work. But we have no particular reason to think that they will.
I always like the idea of developing true AI through evolution simulation. Worked for us, didn't it? And we can ramp up the speed of the evolution, of course, since we don't have four billion years to wait around for a computer that can paint a new Van Gogh. Or rule us with perfect enlightenment, which is what we're really after--divine help.
The process by which we assign meaning and links to our sensory input and produce a command output remains basically opaque to us. Oh, we think we have an idea of how things work on the neural level, but how that all gets put together? We don't know. ...Until we have some idea of how that whole thinking/consciousness thing works
Well, I don't know that it's opaque, or even that mysterious. Complex, yes.
When you look at it from the perspective of a system acting under a large number of commands, it's not that hard to understand. You put your hand on something hot, you receive pain input, your compulsion to avoid pain results in a command to remove your hand.
Of course, the actual decision process is far more complex: you might also modify your reaction to fulfill your competing compulsions to achieve social status by appearing indifferent (or at least casual) to pain, or perhaps you are putting your hand near a hot flame to flip over a burger in order to eat.
The command would proceed from the relative intensity of the compulsions and the inputs relating to them, e.g. if you're very very hungry you might endure a lot of pain to eat.
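That compulsion-weighting model can be caricatured in a few lines. Everything here is invented for illustration: each drive has an intensity, each candidate action scores against the drives, and the command that wins is the best weighted fit.

```python
# Intensities of competing compulsions (made-up numbers).
compulsions = {"avoid_pain": 0.9, "eat": 0.4, "look_tough": 0.2}

# How well each candidate action satisfies each compulsion, on a -1..1 scale.
actions = {
    "yank_hand_back": {"avoid_pain": 1.0, "eat": -0.5, "look_tough": -0.3},
    "flip_the_burger": {"avoid_pain": -0.6, "eat": 1.0, "look_tough": 0.4},
}

def choose(compulsions, actions):
    """Pick the action with the highest intensity-weighted score."""
    def score(effects):
        return sum(compulsions[c] * effects[c] for c in compulsions)
    return max(actions, key=lambda a: score(actions[a]))

print(choose(compulsions, actions))   # "yank_hand_back": pain wins

compulsions["eat"] = 2.0              # now you're very, very hungry
print(choose(compulsions, actions))   # "flip_the_burger": hunger outweighs pain
```

The second call shows the point of the model: no logic changes, only the relative intensities, yet the command flips.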
One thing that most non-programmers don't realize is that we can code just about anything, given enough time. But there's always the constraint that your program has to run in a reasonable amount of time.
I'm sure there's no computer system today that can handle the neuronal processes alone, let alone mimic all the hormonal feedback, in anything like real time. Given a few million programmer hours, we might be able to write most of the code for a human brain's processes, but I'm guessing it would take years or decades to decide to stand up.
Given a few million programmer hours, we might be able to write most of the code for a human brain's processes
Maybe for stuff like vision, but what about love?
I am a programmer, Dave, and it would be such a colossal undertaking as to be essentially impossible. Like I said, the only thing that is feasible is a self-programming learning AI. But how do you write code that writes code for itself?
TallDave: Pain is pretty simple. When you look at, say, dog that reminds you of the dog that you had when you were a kid, and you're wistful, but you look at another dog and you're annoyed or maybe wary, and what those things actually mean in terms of your actions (as opposed to something simple like "remove hand from source of pain"), is pretty damn complex and opaque.
But, actually, even the pain one isn't that simple. I mean, yes, you receive pain, and that indicates that something is bad. But what, and what do you do? If your hand hurts, you might jerk it away from whatever you're touching, but maybe it hurts because of some internal process, or because your bracelet is too tight. Or maybe when you jerk it away from what it was touching, it's still in an environment that is harmful -- like, you put it onto a hot stove, and now you're touching a different part of the stove. Your mind parses your sensory inputs into meaning, so that when you cut yourself with a knife, you understand that there's a knife, and that's the sharp part of the knife, and how that whole thing is different from putting your hand on the stove, and you know what the stove is and what the counter is. And you know that if the counter is so hot that it burns you, that that means something different from if the stove is so hot it burns you.
You seem to think that this is a problem that just requires a lot of data entry. You're wrong. Trust me, or go find a cognitive scientist and ask him or her: we really don't know how all this works. People have theories, but nobody knows how true they are.
What we need, Igor, are programmable brains. Yes.
PS: I'm also a software engineer, and have an educational background in computer science. Our disagreement has nothing to do with my not knowing how computers work. I'm not an AI specialist or anything, but I have a relatively informed, educated opinion for someone outside the field.
Maybe for stuff like vision, but what about love?
Love is just a word, usually employed to describe some combination of the social compulsion, the mating compulsion, and the compulsion to perpetuate your genotype.
But how do you write code that writes code for itself?
Almost all code is actually written by other code, unless you program in assembler.
Humans don't rewrite their raw nucleotide sequences or hormones, yet we manage to learn and adapt. You don't really have to rewrite your own code much to learn; generally, new heuristics are built from experience and imitation.
How did you learn English? You copied a complex pattern from other people, a pattern built by millions of humans over thousands of years.
I am a programmer, Dave, and it would be such a colossal undertaking as to be essentially impossible.
Shrug. I work on massively complex programs all the time. I don't see humans as being that challenging conceptually, just a large resource problem.
Almost all code is actually written by other code, unless you program in assembler.
That's ridiculous. All code is written by humans. It may use other code to make it more human-friendly, but my compiler/interpreter does not write any code, it just transforms the code I write into assembly.
You don't really have to rewrite your own code much to learn; generally, new heuristics are built from experience and imitation
No, because we are not computers. A computer would have to rewrite and expand its own code because it is not an organic life form, based on a code of DNA. We are built off an instruction set--an indescribably complex one. An AI does not have that, so it would have to build one up.
You seem to think that this is a problem that just requires a lot of data entry.
Data entry is not the same as programming.
You're wrong. Trust me, or go find a cognitive scientist and ask him or her: we really don't know how all this works.
I know quite a few cognitive scientists who disagree and say you're wrong. Complexity does not make an unsolvable problem, just a very difficult and resource-intensive one. Check out some of the work Ray Kurzweil references.
http://www.kurzweilai.net/index.html?flash=1
Shrug. I work on massively complex programs all the time
Mm-kay. Maybe you could send me an outline for an app that feels love, then. In Visio, please.
Episiarch! You've erred again. The correct response follows:
That's ridiculous. All code is written by humans. It may use other code to make it more human-friendly, but my compiler/interpreter does not write any code, it just transforms the code I write into assembly.
Well, sure, but that's like saying your car doesn't drive anywhere, because you're the one pushing on the gas and steering.
but my compiler/interpreter does not write any code
Only because it isn't programmed to. You, on the other hand, have compulsions (or programming) that cause you to do so.
I wrote something called a "smart browse." It writes new code on the fly, based on what a user requests. It then compiles that code and runs a brand-new program -- every time a user accesses the browse screen.
So basically we're talking about higher and higher levels of interpretation. With the smart browse, it writes 4GL based on inputs, the compiler makes those inputs into machine instructions. At the top is a prime mover who decides to do something. Humans are always the prime movers because we built everything else, in accordance with our own programming.
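A minimal sketch of "code that writes code," in the spirit of the smart-browse example above (the details here are invented, not how that program worked): build source text from a runtime request, compile it, and call the brand-new function.

```python
def make_sorter(field):
    """Generate, compile, and return a sort function for the requested field."""
    source = (
        f"def sort_by_{field}(rows):\n"
        f"    return sorted(rows, key=lambda r: r['{field}'])\n"
    )
    namespace = {}
    # compile() turns the generated text into bytecode; exec() defines the
    # new function inside our private namespace dict.
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace[f"sort_by_{field}"]

rows = [{"name": "b", "age": 2}, {"name": "a", "age": 1}]
sort_by_name = make_sorter("name")  # a function that did not exist until now
print(sort_by_name(rows))
```

This is still a human-authored template being filled in, which is rather the point of the thread's disagreement: the machine emits new code, but the logic for emitting it never changes on its own.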
Eventually, we'll get to a point where you can just tell a machine "love me" and it will behave in a way that correlates with all the complexity embedded in how you understand the word "love."
I know quite a few cognitive scientists who disagree and say you're wrong.
Oh yeah? Quote one.
Complexity does not make an unsolvable problem, just a very difficult and resource-intensive one.
I didn't say it was unsolvable. I said that we don't know how to do it. I don't have any kind of belief that AI is theoretically impossible: it's just that we don't clearly understand what intelligence is, and so we don't know how to write code which mimics it.
You, on the other hand, suggested that a few million programmer hours could write an AI (maybe one that runs considerably slower than real-time, given current hardware).
Just to quote, it's from this thread: Given a few million programmer hours, we might be able to write most of the code for a human brain's processes, but I'm guessing it would take years or decades to decide to stand up.
That's, frankly, ridiculous, and I don't think that anyone who has any background in the field would endorse it.
I don't see why not. It's just a different set of rules.
And therein lies the fallacy. Simply throwing more rules at really fast binary computers doesn't make a computer understand that "water is wet". The "more rules" approach was the quagmire that researchers got into in the sixties, and never really recovered from.
How did you learn English? You copied a complex pattern from other people, a pattern built by millions of humans over thousands of years.
It is believed that humans have something unique that gives them language. What that unique thing is is, of course, open to debate.
That's why the Turing test was invented. Language is, in general, the holy grail. We can continue to make 'smart' systems which are savants, as described above. I admit that I come from the "old school" of ai, the one before the goal-posts were moved. I believe that an intelligent 'machine' can be built, just not today, probably not in our lifetimes, and with vastly different technology than binary processors at its core.
Hey, what tag are you using to get the quotation style that has the inset text with the grey bar to its left?
Please tell me you aren't hand-creating it with a div or a table.
Michael B Sullivan,
The blockquote tag works here. But no blink tags, damn Reason's hide!
Awesome! Thanks.
Shrug. I work on massively complex programs all the time. I don't see humans as being that challenging conceptually, just a large resource problem.
That is a very, very, telling statement.
I am with MBS on the state of our current understanding of cognitive processes. I am more optimistic than he is about AI, I believe, but it is a non-trivial challenge that first has to overcome a lack of conceptual understanding of the basic processes that support intelligence and learning. It is not a problem of labor, or processing speed, but of knowledge and understanding.
I am optimistic because I believe we don't need to crack "intelligence" but, simply (!?!) learning. We are closer on that front with the progress in both expert systems and Darwinian approaches being employed in robotics.
It is believed that humans have something unique that gives them language. What that unique thing is is, of course, open to debate.
It is most certainly not a simple matter of processing power. And it is certainly not an innate set of rules (a la Chomsky).
How did you learn English? You copied a complex pattern from other people, a pattern built by millions of humans over thousands of years.
This is correct, up to a point. What we don't have a detailed understanding of is how "you copied" the complex pattern, nor how that pattern was built up by those millions of humans. It is certainly an evolutionary process involving the interaction of a number of complex adaptive systems, but we are a long way from figuring out the basic principles that allow that interaction to unfold.
(I do research into how cognitive processes break down when the system is damaged... my job would be much easier if we understood how things work as well as TallDave implies.)
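For what it's worth, the "Darwinian approaches" mentioned above are easy to sketch. Here's a toy illustration (my own, not anyone's actual robotics code): candidate strings are mutated and the fittest are kept, much like Sims's picture breeding with a fitness function standing in for the artist's eye. All names and parameters are invented for the example.

```python
import random

def evolve(target, pop_size=50, mutation_rate=0.05, max_gens=500, seed=0):
    """Toy evolutionary loop: keep the fittest strings, breed mutated copies."""
    rng = random.Random(seed)
    alphabet = sorted(set(target))  # cheat: only search over the target's characters
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    # Start from random strings of the right length.
    pop = ["".join(rng.choice(alphabet) for _ in target) for _ in range(pop_size)]
    for gen in range(max_gens):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == target:
            return pop[0], gen
        parents = pop[: pop_size // 2]
        # Elitism: carry the best string forward unchanged, then fill the
        # population with mutated copies of randomly chosen parents.
        pop = [pop[0]] + [
            "".join(
                rng.choice(alphabet) if rng.random() < mutation_rate else c
                for c in rng.choice(parents)
            )
            for _ in range(pop_size - 1)
        ]
    return max(pop, key=fitness), max_gens

best, generations = evolve("water is wet")
print(best, generations)
```

Note what this does and doesn't show: selection plus mutation finds the target without anyone writing rules about it, but the "fitness function" here already knows the answer. The hard part in real systems is that nobody hands you the fitness function.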
Primarily because it is nearly impossible to separate language from intelligence, imho.
This is correct, up to a point.
A very limited point. The human mind is that unique piece of non-binary processing hardware. There's something in there (to be technical) that lets it perceive language patterns and derive understanding from them. This is why, despite 20 years of software development experience, I don't subscribe to the "throw more rules at it" theory of A.I.
There was a great NOVA series back in the '80s -- maybe early '90s -- on AI (I had it on VCR) where they covered the rise of early AI research, and how researchers were convinced they'd have this nut cracked in a decade.
Failure upon failure later, "more rules" kept coming to the fore. A philosophy professor featured on the program did an excellent job of quickly trashing the "more rules" theory. This was all in the context of the researchers trying to get the computer to 'understand' a simple children's panel story, which contained something like six comic-like cells.
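The "more rules" approach itself is easy to sketch. Here's a toy forward-chaining engine (my own illustration, not any of the actual systems the program covered): every scrap of knowledge is an explicit if-then rule over symbolic facts, and anything the rule-writer didn't anticipate derives nothing at all.

```python
def forward_chain(facts, rules):
    """Keep applying if-then rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("x is water",), "x is a liquid"),
    (("x is a liquid", "x touches skin"), "x feels wet"),
]

# Within the rules' coverage, the system appears to "understand" wetness...
print("x feels wet" in forward_chain({"x is water", "x touches skin"}, rules))
# ...but a fact the rule-writer didn't anticipate derives nothing whatsoever.
print(forward_chain({"x is melted snow", "x touches skin"}, rules))
```

The brittleness is the point: the system never infers that melted snow is water, because no one wrote that rule, and the sixties answer was always "write more rules."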
The human mind is that unique piece of non-binary processing hardware.
Mind = software
Brain = hardware
No?
Just being pedantic.
I am not so sure the human mind is that unique. It is just a variation on a general theme.
I agree with your other points.
Speaking of NOVA,
The recent one on primate intelligence pointed out that the main difference between humans and other primates may not be in our ability to learn, but in our ability, and propensity, to teach.
Mind = software
Brain = hardware
No?
Mmm, n...
I think that...
Well it's like this...
Ok well, think of it this wa...
Haven't we been trying to answer this brain vs. mind question for a long time now? I don't think I can say one way or the other. Ask me 20 years ago, I'd have shot out an answer. Probably "yes". Now, not so sure.
not be in our ability to learn, but in our ability, and propensity, to teach.
Interesting hypothesis. I have problems with it on its face, but interesting. I'll have to look into this.
Paul,
I don't mean to imply any sort of dualism with my mind/brain question above.
I believe that the mind is the activity of the brain, but when we talk about "hardware" we are talking about the brain (which technically includes much more of the body than what lives in the skull), not the mind.
It is really more of a language thing than a conceptual thing. Where people typically get screwed around is when they start thinking of thoughts as somehow separate from the activity of the brain (aka, the mind).
For what it's worth, computer science analogies of the brain/mind have been one of the major barriers to progress in AI, imho.
# Michael B Sullivan | August 26, 2008, 1:17pm | #
# ...But we aren't fundamentally any closer
# to AI than we were 20 years ago. It's by
# no means clear that we're on track to get it.
Two comments here. First: read a book by Hubert Dreyfus (formerly of MIT, now at Berkeley), "What Computers Can't Do." It is as relevant today as when it was first published in the 1970s. The AI crowd practically spat at him when the book was new, but their hype was shown to be bogus, and his good arguments have withstood the test of time. It was one of the most important books I ever read during my career in the personal computer industry, and I am so glad I read it earlier rather than later.
Second: check out what Palm Pilot inventor Jeff Hawkins and associates are doing at Numenta (http://www.numenta.com). They have come up with an impressive software model of (cerebral) cortex, which they are currently evangelizing to researchers and developers, primarily for pattern recognition applications that are more flexible and accurate than, say, the hardcoded AI of yore or the "neural nets" of more recent years. Assuming that they are onto something fundamental here (and I think they are), the "let a thousand flowers bloom" strategy they are pursuing will help make up for lost time by accelerating progress toward true AI. Look at what they have at present, and see what you think. I don't know if we'll have HAL in 20 years, but I very much expect some exciting and eminently useful gizmos from this technology in that time, enough to inspire Dreyfus to revise his book (or write another), if he is still alive and intellectually active at that time.
Jeff Hawkins
I like Jeff Hawkins work.
His theoretical model of the cortex is the kind of thing that is needed to move our understanding of cognitive processes forward. Unfortunately it is clear the cortex is only a piece of the puzzle, with sub-cortical processing being critical to understand the whole.
# Neu Mejican | August 27, 2008, 12:35am | #
# Unfortunately it is clear the cortex is
# only a piece of the puzzle, with sub-cortical
# processing being critical to understand
# the whole.
Agreed, which is why I haven't put my money down to bet on HAL by 2028. Still, we have to take this a step at a time, and so it is important for us to take REAL steps. Most of the steps taken by earlier AI research didn't do much to move the project forward, and indeed often sent it backward or on wild-goose chases that wasted time and resources. At least the cortex model work appears to be solid progress in a good direction, though several other developments will need to be made, and the collection of them integrated with cortex technology, to yield anything that might be HAL-like. Long before HAL, though, such things as cortex-based associative memories, pattern recognizers and event predictors promise to revolutionize a wide range of application categories and create new categories besides. So I am very optimistic about what the future will bring, much more so than during the golden era of AI hype when I was first starting out in computers...
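To make "event predictors" concrete, here is a toy first-order sequence predictor (my own sketch, emphatically not Numenta's actual HTM algorithm, whose details are far richer): count which event follows which, then predict the most frequent successor. The class name and interface are invented for the example.

```python
from collections import Counter, defaultdict

class SequencePredictor:
    """Toy first-order event predictor: tally observed transitions,
    then predict the most frequently seen successor of an event."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, sequence):
        # Record every adjacent pair (previous event -> next event).
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, event):
        # Return the most common successor, or None if the event is unseen.
        counts = self.transitions.get(event)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

p = SequencePredictor()
p.train("abcabcabd")
print(p.predict("a"))  # 'b' -- 'a' was always followed by 'b'
print(p.predict("b"))  # 'c' -- seen twice, versus 'd' once
```

Crude as it is, this captures the flavor of prediction-from-learned-patterns that distinguishes this line of work from the hardcoded rule systems of earlier AI.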
In a roundabout sort of way, isn't this how "we" can handle "paradoxes" and not go apeshit?
Don't know if anybody added this in the comments yet but it seems apropos given the title of the post.