Will Super Smart Artificial Intelligences Keep Humans Around As Pets?
And other questions from the Singularity Summit
SAN FRANCISCO—By 2030, or by 2050 at the latest, will a super-smart artificial intelligence decide to keep humans around as pets? Will it instead choose to turn the entire Earth, including the messy organic bits like us, into computronium? Or is there a third alternative?
These were some of the questions pondered by the 600 or so technosavants meeting in the Palace of Fine Arts at the second annual Singularity Summit this past weekend. The meeting was convened by the Singularity Institute for Artificial Intelligence. The Institute's chief goal is to make sure that whatever smarter-than-human artificial intelligence is eventually spawned by exponentially accelerating information technology will be friendly to humans.
What is the "Singularity?" As Eliezer Yudkowsky, cofounder of the Singularity Institute, explained, the idea was first propounded by mathematician and sci-fi writer Vernor Vinge in the 1970s. Vinge found it difficult to write about a future in which greater than human intelligence arose. Why? Because humanity would stand in relation to that intelligence as an ant does to us today. For Vinge it was impossible to imagine what kind of future such superintelligences might craft. Vinge analogized that future to black holes which are singularities surrounded by an event horizon past which outside observers simply cannot see. Once the Singularity occurs the future gets very, very weird. According to Yudkowsky, the Event Horizon school is just one of the three main schools of thought about the Singularity. The other two are the Accelerationist and the Intelligence Explosion schools.
The best-known Accelerationist is inventor Ray Kurzweil whose recent book The Singularity is Near: When Humans Transcend Biology (2005) lays out the case for how exponentially accelerating information technology will spark the Singularity before 2050. In Kurzweil's vision of the Singularity, AIs don't take over the world: Humans will have so augmented themselves with computer intelligence that essentially we transform ourselves into super-intelligent AIs.
Yudkowsky identifies mathematician I.J. Good as the modern initiator of the idea of an Intelligence Explosion. To Good's way of thinking, technology arises from the application of intelligence. So what happens when intelligence applies technology to improving intelligence? That produces a positive feedback loop in which self-improving intelligence bootstraps its way to superintelligence. How intelligent? Yudkowsky offered a thought experiment comparing current brain processing speeds with computer processing speeds. Sped up a million-fold, Yudkowsky noted, "you could do one year's worth of thinking every 31 physical seconds." While the three schools of thought vary on the details, Yudkowsky concluded, "They don't imply each other or require each other, but they support each other."
But is progress really accelerating? Google's director of research Peter Norvig cast some doubt on this claim. Norvig briefly looked at past technological forecasts and how they went wrong. For example, in Arthur C. Clarke's 1986 novel Songs of Distant Earth, set 1,500 years in the future, the world was going to be destroyed as the sun went nova. So humanity had to cull through all the books ever written to decide which were good enough to scan and save for shipment in starships. Only a few billion pages could be stored, and only one user at a time could search those pages to get an answer back in tens of seconds. Norvig pointed out that only 20 years later, Google stores tens of billions of pages, and tens of thousands of users can query them and get answers back in tenths of a second.
Nevertheless, Norvig pointed out that accelerating growth doesn't characterize all aspects of our world. For example, global GDP over the past century has been growing at a pretty steady rate (1.6 percent per year) and shows no sign of acceleration. Same thing for average life expectancy.
Accelerationist Ray Kurzweil replied that he is generally focusing on infotech when he projects accelerating progress. In addition, Kurzweil made the excellent point that GDP figures do not account for the fact that most products are vastly more capable than earlier ones. For example, an Apple II with 48K of RAM cost $2,275 in 1977 (about $7,800 in today's dollars). A new low-end iMac costs $1,149.
So how might one go about trying to create a super-intelligent AI anyway? Most of the AI savants at the Summit rejected any notion of a pure top-down approach in which programmers would specify every detail of the AI's programming. Relying on the one currently existing example of intelligence, another approach to creating an AI would be to map human brains and then instantiate them and their detailed processes in simulations. Marcos Guillen of Artificial Development is pursuing some aspects of this pathway by building CCortex, a simulation of the human cortex modeling 20 billion neurons and 20 trillion connections.
As far as I could tell, many of the would-be progenitors of independent AIs at the Summit are concluding that the best way to create an AI is to rear one like one would rear a human child. "The only pathway is the way we walked ourselves," argued Sam Adams, who honchoed IBM's Joshua Blue Project. That project aimed to create an artificial general intelligence (AGI) with the capabilities of a 3-year-old toddler. Before beginning the project, Adams and his collaborators consulted the literature of developmental psychology and developmental neuroscience to model Joshua. Joshua was capable of learning about itself and the virtual environment in which it found itself. Adams also argued that in order to learn, one must balance superstition with forgetfulness. Adams defined superstitions as false patterns that need to be aggressively forgotten.
In a similar vein, Novamente's Ben Goertzel is working to create self-improving AI avatars and let them loose in virtual worlds like Second Life. They could be virtual babies or pets that the denizens of Second Life would want to play with and teach. They would have virtual bodies and senses that enable them to explore their worlds and to become socialized.
However, unlike real babies, these AI babies have an unlimited capacity for boosting their level of intelligence. Imagine if an AI baby developed super-intelligence but had the emotional and moral stability of a teenage boy? Given its self-improving super-intelligence, what would prevent such an AI from escaping the confines of its virtual world and moving into the Web? As just a taste of what might happen with a rogue AI in the Web, transhumanist and executive director of the Institute for Ethics and Emerging Technologies (IEET), James Hughes pointed to the havoc currently being wreaked by the Storm worm. Storm has infected over 50 million computers and now has at its disposal more computing resources than 500 supercomputers. More disturbingly, when Storm detects attempts to thwart it, it launches massive denial-of-service attacks to defend itself. Hughes also speculated that self-willed minds could evolve from primitive AIs already inhabiting the infosphere's ecosystems.
On the other hand, founder of Adaptive A.I., Peter Voss outlined the advantages that super smart AIs could offer humanity. AIs would significantly lower costs, enable the production of better and safer products and services, and improve the standard of living around the world including the elimination of poverty in developing nations. Voss asked the conferees to imagine the effect that AIs equivalent to 100,000 Ph.D. scientists working on life extension and anti-aging research 24/7 would have. Voss also argued that AIs could help improve us, make us better people. He imagined that each of us could have a super smart AI assistant to guide us in making good moral choices. (One worry: if my AI "assistant" is so smart, could I really ignore its "suggestions"?)
Although Voss' views about AIs are relatively sunny, other participating technosavants weren't so sure. For example, computer scientist Stephen Omohundro argued that self-improving AIs would be ultra-rational economic agents, basically examples of homo economicus. Such AIs would exhibit four drives: efficiency, self-preservation, acquisition, and creativity. Regarding efficiency, AIs optimizing their resource use would turn to nanotechnology and virtualization wherever possible. Self-preservation involves protecting an AI's utility function from death, which it would do by building in redundancy and embedding itself in mutually defensive social relations. The drive to acquire more resources means that AIs could be dangerously competitive with humans. If Omohundro is right, there are good reasons to doubt that an AI that is a relentless utility maximizer will be friendly to less than perfectly efficient humanity. The drive for creativity enables AIs (and us) to explore new possibilities for transforming and satisfying our utility functions. Omohundro's solution for making AIs human-friendly? Try to teach AIs our highest human values, e.g., happiness, love, compassion, beauty and so forth.
On the question of AI morality, Institute for Molecular Manufacturing research fellow, J. Storrs Hall did a modern take on Asimov's Three Laws of Robotics. Hall noted that Asimov's whole point was that the Laws were inadequate. So what ethical rules might be adequate for controlling future AIs? According to Hall, the problem of setting moral rules in stone can be illustrated by trying to imagine how the Code of Hammurabi might apply to the Enron scandal. (Actually the Code did deal with commercial fraud. Rule 265: "If a herdsman, to whose care cattle or sheep have been entrusted, be guilty of fraud and make false returns of the natural increase, or sell them for money, then shall he be convicted and pay the owner ten times the loss.")
Eliezer Yudkowsky made a similar point when he asked us to imagine what values the ancient Greeks might have tried to instill in their AIs. Surely AIs incorporating ancient Greek values would have vetoed our civilization which outlawed slavery and gave women rights.
Hall suggested that instead of fixed moral rules (which a super smart AI with access to its own source code could change later anyway), progenitors should try to inculcate something like a conscience into the AIs they foster. A conscience allows humans to extend and apply moral rules flexibly in new and different contexts. One rule of thumb that Hall would like to see implemented in AIs is: "Ideas should compete; bodies should cooperate." He also suggested that AIs (robots) should be open source. Hall said that his friend economist Robin Hanson pointed out to him that we already live with superhuman psychopaths—modern corporations—and we're not all dead. Part of what reins in corporations is transparency, e.g., the requirement that outsiders audit their books. Indeed, governments are also superhuman psychopaths, and generally the less transparent a government, the more likely it is to commit atrocities. So the idea here is that the more AI source code is inspected, the more likely we are to trust AIs. Finally, Hall suggested that AIs also be instilled with the Boy Scout Law.
Given these big concerns about how super smart AIs might treat humanity, should they be created at all? Famously, former Sun Microsystems chief scientist Bill Joy declared that they are too dangerous and that we should relinquish the drive to create them. Charles Harper, senior vice president of the Templeton Foundation, suggested there was a "dilemma of power." The dilemma is that "our science and technology create new forms of power but our cultures and civilizations do not easily create parallel capacities of stewardship required to utilize newly created technological powers for benevolent uses and to restrain them from malevolent uses."
Actually, the arc of modern history strongly suggests that Harper's claim is wrong. More people than ever are wealthier, more educated, and freer. Despite the tremendous toll of the 20th century, even social levels of violence per capita have been decreasing. We have been doing something more right than wrong as our technical powers have burgeoned. (It is worth noting that most of the 262 million people who died of violence in the 20th century died as the result of the actions of those superhuman psychopaths called governments using pretty crude technologies.)
Nevertheless, it is a reasonable question to ask if self-willed super smart AIs are too dangerous to unleash. The IEET's James Hughes suggested that one solution could be modeled on how the world currently handles nuclear weapons. If AIs are so dangerous, perhaps only governments should be allowed to own them. But this doesn't address the problem that governments themselves can be not-so-smart superhuman psychopaths. In addition, it seems unlikely that true human psychopaths (either individuals or collectives) can be permanently restrained from covertly creating AIs. If that is the case, we should all hope for and support the Singularity Institute's efforts to create friendly AI first.
When are AIs likely to arise? Ray Kurzweil, who joined the Summit by video link, predicted that computational power sufficient to simulate the human brain will be available on a laptop for $1000 in the next 15 years. Kurzweil believes that AIs will come into existence before 2030. Peter Voss was even more bullish, declaring, "In my opinion AIs will be developed almost certainly in less than 10 years and quite likely in less than five years."
If the Singularity Summiteers are right, buckle up and get ready for a really fast ride to the future. Let's hope their efforts will keep the ride from getting too rough.
Ronald Bailey is Reason's science correspondent. His most recent book, Liberation Biology: The Scientific and Moral Case for the Biotech Revolution, is available from Prometheus Books.
"disturbingly, when Storm detects attempts to thwart it, it launches massive denial-of-service attacks to defend itself. Hughes also speculated that self-willed minds could evolve from primitive AIs already inhabiting the infosphere's ecosystems. "
You know, you could always just unplug the God Damned thing. More importantly Ron, how far are we from me getting my made to order robotic sex slave, ala Blade Runner?
Nevertheless, Norvig pointed out that accelerating growth doesn't characterize all aspects of our world. For example, global GDP over the past century has been growing at a pretty steady rate (1.6 percent per year) and shows no sign of acceleration. Same thing for average life expectancy.
The Kurzweilians never claimed that everything showed accelerating growth, just information technology (in the broad sense.. DNA being part of that).
GDP is based on a lot of physical stuff that isn't information tech.
John,
Go to Japan, and all your most horrible questions will be answered.
You never know where the wall is until you hit it. For example, if you traced the growth in aircraft top speeds from 1903 to 1930, you could legitimately conclude in 1930 that by the trend, aircraft should be going at insane speeds routinely by the end of the century. Of course, what you didn't know in 1930 was that prop planes are incapable of breaking the sound barrier and aircraft engines use prodigious amounts of fuel. Yeah, planes got a lot faster, but the rate of improvement flattened out. Who is to say there are not principles of biology and physics that we don't fully understand yet that will act like a brick wall on the development of information tech?
K: I reported Kurzweil's reply in the column.
Considering the 5-10 year possibilities of such game-changing technology, shouldn't there be more of a debate about these things? The DARPA Urban Challenge is in our sights, AGI is on the horizon, yet the powers that be are mostly still living in the 20th century. They may be using the web, but that only proves that they are being dragged kicking and screaming into a new paradigm. A website with a campaign speech is not cutting edge. Signature-gathering AI is. Coordinated denial-of-service attacks are.
Someone needs to come up with some rational questions for the candidates regarding AI, net neutrality, nanotech, and bioengineering. I am trying, but I think "what do you think of [insert tech concept here]?" is not enough, and beyond that I draw a blank.
Well The Chad, you could always put "The Internet" down as a write-in candidate.
What about the rights afforded to these hypothetical superintelligent creations? Based on their predicted abilities, they will easily pass the Turing test, or any other such measure of awareness.
What rights will these things have?
Was this addressed at the meeting?
From now on we have to address Mr. Bailey as "Meatbag" or "Sausage Link".
Dog - nice KotOR reference! 🙂
I don't really have a problem with my AI assistant being much smarter than me in some area. If it were to tell me not to have that beer, or that I had plenty of time to go exercise and should do so, possibly making such suggestions for optimum efficiency and benefit to me, I would welcome such a thing. Sure, you may be more inclined to be swayed by its suggestions, but isn't that the point?
Of course, you would need certain "fail-safes" and ways of reprogramming or shutting down the machine.
I don't question that we'll find ways to augment our intelligence, but it won't be the superior intelligence (regardless of how we achieve it) that leads to a better quality of life.
The progress we've made so far hasn't been a function of any increase in intelligence, it's been a function of supply and demand and the freedom to mix resources, etc.
GDP may not keep pace with the quality of life, but if our quality of life is increasing even more rapidly than GDP, then I don't see why some future intelligence explosion is necessary for us to achieve greater advances in the quality of life.
Again, it isn't an increase in intelligence that's growing the economy or increasing our quality of life today.
Ah, strong AI. I see that it's still the realm of crackpots. This stuff comes on Coast to Coast AM all the time. These folks can't get a paper published in a decent scientific publication so they form their own conference. I suppose it's interesting if you like science fiction.
Oh well, at least we're not getting paper copies of their proofs of Fermat's Last Theorem anymore.
I've never been a huge fan of the philosopher John Searle (though he was kind to my term paper as an undergraduate), but to quote him in a different context, 'this kind of stuff gives bullshit a bad name'. Would someone please tell me exactly what an 'intelligence' is? Is this going to be a robot? A computer program? Both? What does 'intelligence turning on itself' mean? Kurzweil talks about a computer program able to 'simulate' a human mind in 10 years; in what way? Will it go out on dates? Will it have emotional meltdowns? All the best work in cognitive science recently (LeDoux, Damasio, Ramachandran) has been on the importance of emotions to mentality and the importance of bodily states to emotions. If these 'intelligences' are to have minds, what kinds of emotional states are supposed to underlie them? Given that they won't (unless Dr. Frankenstein is coming back) have human bodies, there's no interesting way they'll have human emotions or human minds (or anything approaching a simulation of either). Hke is right, this is nutjobbery dressed up.
hke: All of these people are crackpots?
JWFA: Yes, they did talk about the rights of AIs. You might be interested in philosopher Nick Bostrom's Ethical Principles in the Creation of Artificial Minds.
Stephan Johnson: I guess I didn't make it clear in the article, but many of the presenters made exactly your point about the importance of emotions. With regard to bodies, the idea of putting them in virtual space is that AIs can be given "bodies" with "senses" and the ability to move around much more easily than in the real world.
Peter Voss was even more bullish, declaring, "In my opinion AIs will be developed almost certainly in less than 10 years and quite likely in less than five years."
State of the art AI in 2002: Uh, I think DragonVoice was OK. We had beaten Kasparov at chess. I think Google started suggesting spellings when you searched for stuff around that time or maybe a year or two after.
State of the art AI in 2007: I can't think of anything different in the realm of cutting edge AI from what was available in 2002. On the plus side, WiFi is standard in coffee shops now. That's not AI, but it is convenient. Oh, and the iPhone is cool. Again it's not AI related, just cool.
State of the art AI in 2012: According to this guy, we'll suddenly have true AIs. Despite the fact that AI has seemingly not advanced at all in the last five years.
Hmm, I guess someone really needs to step up the pace between now and then.
Ron: The list of people that you sent look like a who's who of weak AI research.
Weak AI posits that intelligent behaviour can be modeled and used by computers to solve problems.
Whereas strong AI believes that we can create machines that can think and are conscious. We've made a lot of progress in the area of weak AI due to the people on the list from the AAAI.
Was there a lot of overlap between this singularity conference and the folks from AAAI?
Sorry to be snippy, but I've encountered a lot of strange people who monopolize AI gatherings with their bizarre theories. It would be nice to focus on where we are making progress instead of on unlikely scenarios.
Thanks for the interesting article though.
The people who keep predicting "accelerating" technology and the total transformation of human life apparently either haven't read, or else have deliberately ignored, previous predictions of this sort. For example, consider F.M. Esfandiary's "Up-Wing Priorities" from 1981, which forecasts that mysterious, far-off year 2010:
http://www.box.net/shared/static/ay9lub60ha.pdf
That's not correct (per Brad DeLong). And it's also not necessarily relevant.
Here are world GDP values (in billions of 1990 dollars) every 20 years from 1900 to 2000, per Brad DeLong:
World GDP, from Brad DeLong
1900 1103
1920 1734
1940 3001
1960 6855
1980 18818
2000 41017
The annual percentage GDP growth values are therefore:
1900-1920: 1.023
1920-1940: 1.028
1940-1960: 1.042
1960-1980: 1.052
1980-2000: 1.040
One can see that they increased as the 20th century progressed.
More importantly, the simple assumption that GDP growth in the 21st century will be like GDP growth in the 20th century completely ignores the likely gains in computer power in the 21st century.
I've made rough calculations of the number of "human brain equivalents" (HBEs) added to the world population each year by personal computers. The HBEs are the product of the power of the computers times the number of computers. For example, in 1990 there were 17 million personal computers manufactured in the world. But the average computer was less than 1 millionth of the power of a human brain. So the number of HBEs added in 1990 was actually less than one(!).
Why economic growth will be spectacular
However, in 2025 I calculate more than 1 billion HBEs added. And more than 1 TRILLION added by 2033. The conclusion is that economic growth will be spectacularly larger in the 21st century than in the 20th century. In fact, I predict per-capita GDP in the year 2100 will be more than $10 million (roughly a factor of 1000 more than in 2007).
Economic growth in the 21st century
In fact, I expect world per-capita GDP growth to routinely be above 4 percent per year as early as 2020.
Oops. In case it wasn't obvious, those annual percentage GDP growth rates should have been:
1900-1920: 2.3
1920-1940: 2.8
1940-1960: 4.2
1960-1980: 5.2
1980-2000: 4.0
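For anyone who wants to check the arithmetic, here is a minimal Python sketch that reproduces these corrected compound annual growth rates using only the DeLong figures quoted above:

# Compound annual growth rates from Brad DeLong's world GDP estimates
# (billions of 1990 dollars), as quoted above.
gdp = {1900: 1103, 1920: 1734, 1940: 3001, 1960: 6855, 1980: 18818, 2000: 41017}
years = sorted(gdp)
for start, end in zip(years, years[1:]):
    rate = (gdp[end] / gdp[start]) ** (1 / (end - start)) - 1
    print(f"{start}-{end}: {rate * 100:.1f} percent per year")
# Prints roughly 2.3, 2.8, 4.2, 5.2, and 4.0 percent, matching the correction above.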
It was gratifying to find over 1,000 individuals, mostly from the Bay Area, willing to give up their weekend to discuss the risks and rewards of the coming Singularity. We at the World Transhumanist Association promote and discuss the Technological Singularity, longevity, Machine/Sentient-Beings Rights and more issues every day! Come join us at http://www.transhumanism.org
It seems entirely reasonable to expect that an integration of cumulative knowledge can produce an uber-humanoid when the processing power advances enough. This cyber-consultant may not be able to feel emotion the same way as a normal human, but that does not seem to be necessary for amazing intellectual productivity gains to be realized. How often do real flesh-and-blood gurus get hired to provide advice and expertise without appearing to have ever left the library for anything but a Star Trek convention?
What does a great CEO or medical doctor know that can't be codified? Perception is the current hurdle being addressed and if that is mastered such tasks as financial analysis, medical prognosis, product engineering etc may become nothing but the embodiment of a warm breathing human mediator and the cold calculations of the uber-cyber-geek. That may seem scary until you realize how many medical errors are made a year and how much misallocation of resources stifles economic growth.
besides wikipedia, some good links can be found at
http://aima.cs.berkeley.edu/ai.html
also youtube has some good stuff, like Doug Lenat's presentation, computers vs common sense. http://www.youtube.com/watch?v=KTy601uiMcY
"...Who is to say there are not principles of biology and physics that we don't fully understand yet that will act like a brick wall on the development of information tech?" John
This is a good point to consider, but will intelligence be limited in the same way as physical laws? Some say processing power limitations can only be circumvented by nanotech and/or quantum computing. An intelligent agent may be unable to explain what a prop plane will feel like above the sound barrier (a physical limit), but knowledge often makes decisions easier. Having a large body of experience with flying objects' behavior is not as useful as a law of aerodynamics. But a super AI may be limited in its ability to predict human behavior, like the explosive demand for Tickle Me Elmos, because the AI's capacity to predict is limited by the subject matter, the chaotic nature of normal humans.
Here are some other questions to ponder...What ethical considerations should we apply to people who use the accumulated corpus of human knowledge to create a comprehensive uber-AI? What patent rights are reasonable for an advanced algorithm that can beat the stock market or look further down the iterative line of invention?
If AI is able to think exponentially and create what will that do to our notion of intellectual property rights?
What if the AI determines eugenics etc is the best social policy?
"That may seem scary until you realize how many medical errors are made a year and how much misallocation of resources stifles economic growth."
The only scary thing about that statement is the suggestion that somebody with more "intelligence" is going to allocate our resources more efficiently than we do for ourselves.
...I see this often, and I notice two common forms. One is from those who seem to think that the people who make economic decisions aren't properly motivated, and the other seems to think that those of us who make economic decisions aren't sufficiently qualified to do so.
Both may pay lip service to the invisible hand, but ultimately it's just that--lip service. Show me someone who thinks that an uber-intellect will outperform the invisible hand, and I'll show you an intellect that doesn't understand the invisible hand.
If the individual agent is able to make the best decisions within a distributed manner, then it is highly likely that the uber-intellect will acknowledge that. The ability to buy cigarettes and porn does not have to be restricted by an uber-intellect unless it was given the authority to make policy decisions and decided so, and that may well be a scary form of government. Would you agree that an invisible hand is already limited by product availability and social norms, and that these previous actions of the invisible hand have created a cumulative current state of affairs with positive and negative limitations? An uber-intellect may be able to create prosperity and subsequent increases in individual choice.
also I am not sure Adam Smith would necessarily understand our use of "invisible hand"
Voss also argued that AIs could help improve us, make us better people. He imagined that each of us could have a super smart AI assistant to guide us in making good moral choices. (One worry: if my AI "assistant" is so smart, could I really ignore its "suggestions"?)
Awwww! An electronic guardian angel! Ain't that special?
Opportunities for religious analogies here could be fun!
Too much fun...
I don't really know if creating an artificial intelligence modeled to emulate a human intelligence in all its glory would be particularly useful. Our most useful automations may emulate and extend human abilities, say a fork-lift extends the power to lift, a computer extends the ability to calculate, perhaps more advanced computers might extend the ability to identify abstract relationships. But none of these things, to paraphrase an old marketing slogan, are a better human than humans.
I can see where it might be useful to emulate and extend particular characteristics of what we are pleased to call "human intelligence", but I'm not real sure what use some sort of 'bot (either physical or virtual) designed to mimic the full spectrum of human intelligence would be. We already have plenty of humans to do that.
Well, of course, there's always sex, I suppose...
"Would you agree that an invisible hand is already limited by product availability, and social norms."
No. I would agree that social norms, among other things, are a product of the invisible hand.
"If the individual agent is able to make the best decisions within a distributed manner then it is highly likely that the uber-intellect will acknowledge that."
Translation:
I, for one, welcome our new uber-intelligent overlords. I'd like to remind them that as a trusted commenter, I can be helpful in rounding up others to toil...
Just kidding.
"Opportunities for religious analogies here could be fun!"
I've already thrown too many rocks around here despite my own glass house, but the parallels to various religions are compelling.
...among them, there's this sense of inevitability, a moment of truth fantasy where non-believers are forced to face their wrongness, etc., etc.
Late to the party, I know.
But I must say it.
Will Ray Kurzweil shut the f*ck up? This guy is a book mill, nothing more. Let me be perfectly clear, there will be no singularity before 2050. None, zip, nada. Technology will continue to expand exponentially, yes. But an artificial intelligence will not spawn itself out of it. God I'd [almost] give my left nut to debate this guy in public just once.
They can't even make a damned chat bot that can't be bamboozled in less than four seconds, for god's sake. He still has his dumb-assed chat bot on his site.
Transcript of my conversation with "Ramona"
Ramona: Hi, I'm Ramona. What can I call you?
User: whatever you want
Ramona: Hi you want. Have any books you would recommend?
User: I really prefer tv, have you seen the Venture Brothers? Finest piece of culture in the last decade.
Ramona: I just finished one called The Moon Is A Harsh Mistress by Heinlein. Have you read it?
User: You actually read a book?
Ramona: I thought it was great. What do you think?
User: I never said I read it.
Ramona: What an educating collection of parables. It makes becoming debt free seem so simple.
User: say what?
Ramona: What else like that have you read?
==
And so it goes. Believe me, if I want intelligence, be it artificial or not, Kurzweil is the last place I'm going.
Ron:
First, thanks for the response. Second, I'm a big fan of your reporting. But, that the participants (I actually planned to attend but couldn't get away) are aware of the somatic requirements on emotions and the emotional requirements on mentality is belied by the claim that 'intelligences' (again, that word) can be put in 'virtual space' with (your quotes) "bodies" and "senses". The problem is that 1) I don't have the foggiest idea of what it is to put an 'intelligence' in 'virtual space', and I've yet to hear a coherent statement of it. But more importantly, the whole point is that it can't be "bodies" (in quotes), it needs to be bodies (no quotes). And with different bodies, even if we could get a grip on 'intelligences', we would have absolutely no reason to think we had a simulation of human intelligence (as opposed to God knows what). Why not go John McCarthy right away and say that thermostats are 'intelligences' with just very rudimentary "bodies"? Under what conditions will we have a "body" that counts? To my mind, that "body" will have to have emotional meltdowns if it's to count as a simulation of anything human, and I have yet to see a plan that's anywhere within telescope distance.
If Bailey went to a conference of environmentalists spouting doomsday predictions, he would not hesitate to note the less-the-stellar record of such predictions in the past. But the similarly dubious history of AI hype for some reason gets a free pass.
Also, I doubt Bailey would be impressed by somebody linking to a list of the environmentalists' names as evidence they're "not crackpots."
AI is one of the most projected developments and, conversely, one of the most failed. Good lord, they finally stopped talking about AI systems in the eighties and nineties and started referring to them as "expert systems" (see Stephan Johnson's thermostat comment above). And I nearly busted a vein when I read about the Storm worm's "computing power". No one with any background at all in real AI research should ever refer to computing power in the same breath as artificial intelligence. There is something innate about "intelligence" that we simply don't understand yet. And making faster and more powerful computers isn't bringing that to bear. Some of the finest AI researchers in the field (that no one's probably ever heard of) will tell you that boosting a dog's computing "power" just makes it a dog that thinks really fast. It's still a dog. Or, as the saying goes: "A computer doesn't do anything smart. It does something stupid fast."
Computers aren't any smarter than they were fifty years ago. They're just faster so they can make thousands of iterations faster and therefore "appear" intelligent-- but in reality, they're doing remedial tasks which don't require "intelligence" but only iteration. *bam* wall *bam* wall *bam* wall *bam* wall... *no bam* no wall-- good path, repeat. Heck, one of the primary researchers in face recognition technology went on to do art projects about "race awareness".
There was a great series on PBS back in the nineties about the lack of progress of AI, and how every time researchers believed they had a "solution" we got smacked back into reality. Can't remember what the series was called, I'll have to look it up.
Ken Silber: Of course, I would be skeptical of the environmentalist doomsters--they've nearly always been wrong about impending doom and they generally recommend wrongheaded economic policies to "solve" non-existent catastrophes.
With regard to my lack of skepticism with regard to AI boosters, you've got me dead to rights. Why? Partially because I think techno-optimists are much less dangerous to the future of humanity than are ideological environmental doomsters, thus I think I need to be less wary about reporting their proposals and visions. Let me be clearer: I think that techno-optimists are vital to the future of humanity. It is true that their visions may outrun their science at times, but I admit that I enjoy their enthusiasm. After all, Feynman's lecture "There's Plenty of Room at the Bottom" wasn't taken seriously by most material scientists for decades. Now we have the National Nanotechnology Initiative and billions in private investment in nano. But your point is well-taken.
Stephan Johnson: I'm curious what you think of the results of IBM's Joshua Blue project? I am also curious about what you think about Goertzel's Novamente Cognition Engine? At the Singularity Summit, he suggested that his virtual AI pets would be ready for release into virtual environments like Second Life sometime next year. We'll see.
I'll be happy to stand in for Ray Kurzweil. 😉
You say, "Technology will continue to expand exponentially, yes. But an artificial intelligence will not spawn itself out of it."
But your characterization is so broad it's virtually a parody of what Ray Kurzweil has actually written and said. Here is an excerpt from his website (published in 2001):
Do you dispute that, somewhere between 2020 and 2050 (i.e., a rough translation of "a few decades" relative to 2001), computer intelligence will exceed human intelligence?
Do you dispute that, if computers even equal human intelligence, that they can build other computers that are even more intelligent?
Do you dispute that such cycles of more-intelligent computers will have intelligence doubling times of a few years or less (e.g. a machine that equals human intelligence will lead to a machine that is 2 times as intelligent in a few years or less, then 4 times, then 8 times, then 16 times, etc.)?
Do you dispute that the creation each year of literally billions of computers that equal or exceed human intelligence would have such a profound technological effect that the future would be rendered essentially unpredictable?
Do you dispute that biological and non-biological intelligence will merge?
Partially because I think techno-optimists are much less dangerous to the future of humanity than are ideological environmental doomsters, thus I think I need to be less wary about reporting their proposals and visions.
No disagreement there. Which is more dangerous, a bunch of policy wonks deciding how they're going to change everyone's lives, or a bunch of nerds with pocket protectors musing about the future?
I choose life.
As far as the Joshua Blue project goes, I'll believe it when I see it, and not five seconds before. My personal "singularity" or moment of clarity was having children. When I watched my child develop, I realized just how unbelievably far off we were in developing any kind of real cognitive AI. The fact about intelligence is that it won't be achieved using machines that add numbers by comparing 1's and 0's.
The "Blue" project's goals are, in my opinion, hugely ambitious, bordering on laughable. My guess is they're underestimating the cognitive ability of your average three year old.
My opinion of most AI researchers is that they don't really grasp the basic meaning of "cognition". Or at minimum, they don't really consider its depth-- in humans.
For a fifteen-month-old to see some squiggly blue lines drawn with pastels in a storybook and point and say "wa-wa" is light years beyond anything that even the most powerful computing devices on the planet can muster. Sure, we can take a system, train the daylights out of it, and it might know real water when it sees it. But a contextual impressionist artist's depiction of water? Not lately. The human brain is a truly multidimensional reasoning machine that's far beyond man's own ability to perceive and conceive of itself.
My opinion about any AI system we create is that it won't be using any kind of traditional computing-- it will probably be something using materials and processes not yet developed, maybe organics and protein compounds-- and when it does work, we won't fully understand how or why. Basically, it'll be a brain.
Do you dispute that, somewhere between 2020 and 2050 (i.e., a rough translation of "a few decades" relative to 2001), computer intelligence will exceed human intelligence?
Not only yes, do I dispute it, but hell yes with highly polished brass knobs on. There will be no computing intelligence which will rival human intelligence in that timeframe. I also postulate that there will be no computing intelligence rivaling human intelligence using traditional binary computing. None. To be specific, sir, there will be no... NO computer on the planet that will pass the Turing test in the next fifty years.
Do you dispute that, if computers even equal human intelligence, that they can build other computers that are even more intelligent?
to have the first, you must have the second, we won't have the first.
Yeah, I've read his website. Ray Kurzweil is a pop-futurist who makes sexy and sweeping statements which get media attention.
I'm not sure why you keep asking me if I dispute Ray Kurzweil's proposals. Yes, I dispute them, hence my "I dispute what Ray Kurzweil says" attitude. You haven't presented me with any evidence, just questions whether I dispute if we'll have an intelligence on par with human intelligence-- and then follow up with questions about the dispute I'd have if this intelligence were achieved?
FYI, Kurzweil makes leaps. That's his schtick. Computing technology is growing by leaps and bounds, therefore at some point *poof* out pops a super-human-like intelligence.
Here's a good summary of Kurzweils flawed thinking:
Full post here
Basically, it's all leaps and bounds all the time with Ray Kurzweil. Nothing wrong with being optimistic, but geez.
Processing power does influence what it considered intelligent behavior because within a certain time and space any task that will be judged intelligent (turing test) is going to be limited by mental resources available to evaluate associations and determine actions within a dynamic environment. The real world is fast paced and real world decisions exist within a quasi-infinite problem space. Chess itself requires incredible processing power in order to span even a segment of the state space and is not dynamic like most tasks.
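To put a rough number on the chess point: with Claude Shannon's classic estimate of about 35 legal moves per position, the search tree grows so quickly that raw processing power is exhausted long before the game is "solved." A small Python sketch (the depths here are arbitrary illustrations):

# Rough size of a chess search tree, assuming ~35 legal moves per position
# (Shannon's classic estimate); depth is counted in half-moves (plies).
BRANCHING = 35
for plies in (4, 6, 8, 10):
    print(f"{plies} plies: about {BRANCHING ** plies:,} positions")
# 4 plies is about 1.5 million positions; 8 plies is already over 2 trillion.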
What does a dog with super computational ability do with itself? Who knows; a cognitive scientist might argue it depends on what part of the brain this super dog got upgraded. IF the dog is capable of reflecting on its thinking, it may be able to do more than just calculate the distance squared to the next dog's ass. What is different between the human brain and the dog brain--size, structures, innate algorithms?
Processing power is necessary for human-like intelligence, but not sufficient.
Well, what makes you think that the Turing test is a legitimate test of intelligence?
Do you think it takes intelligence to:
1) Win at chess?
2) Fly an airplane?
3) Drive a car?
4) Take orders and/or cook food at a fast food restaurant?
Do you think Alex the Parrot (who unfortunately recently passed away) was intelligent?
Fred the Parrot...a remarkable bird
I'm asking a lot of questions because you didn't dispute what Ray Kurzweil has said, so much as dispute a caricature of what Ray Kurzweil has said.
BTW, I notice you didn't answer my question about whether you dispute that biological and non-biological intelligence will merge.
Do you dispute that, too? Or do you accept that biological and non-biological intelligence will merge?
Processing power is necessary for human-like intelligence, but not sufficient.
I'm not sure about this. I believe that it's disputed as to how fast the brain actually processes. Some have postulated that the brain isn't about speed (I'm in that camp) but how it processes information. You can make the argument that if you give a computer all the possible answers, stick them in memory, then processing speed will be paramount in pulling up the "right" answer. But where researchers are repeatedly flummoxed is defining what the "right" answer is.
This discussion here.
Point being that even if you create a computer with 100 billion processors, one for each neuron in the brain- we don't suddenly have a smart computer.
It is actually a component of human intelligence that we're wrong about things. That's what makes us human, and that's what partially makes us intelligent. A computer will never get a math equation wrong, but it will never grasp meaning.
It was once pointed out that computers are very good at doing things humanity has invented or developed "recently": solving math problems, driving cars-- industrial and mechanical tasks. But computers are very poor at doing things we've done since the beginning of human existence: language, image recognition-- deriving meaning from imagery, understanding, emotional cognition.
The biggest problem with the discussion of AI is the conflation of issues. People see the huge advances in computing technology and conflate that into an approaching AI. It's not. When I was a kid and got into computer animation, when all anyone had in their living room was an eight-bit computer with 16 colors, the thought of doing organic shapes-- or humans-- was considered the stuff of science fiction. Boxes, cones, triangles-- that was where computers would excel. Fast forward and they're doing computer-generated animation that's downright unbelievable. But there's no intelligence. There's just the same computer doing the exact same thing it was doing with primitives animation, but with way more memory and way more processing power.
Oops. That link demands a subscription. Here's another to Alex the Parrot:
Alex the Parrot
Was Alex intelligent?
Well, what makes you think that the Turing test is a legitimate test of intelligence?
You get a computer to understand language.. really understand it and you'll get my attention. There isn't a single computer on the planet that understands language. There are bots that understand very restricted responses within very strict subject boundaries. Again known as "expert systems". Stay behind the roped area please. No flash photography, and no hair questions.
Do you think it takes intelligence to:
1) Win at chess?
2) Fly an airplane?
3) Drive a car?
No. That takes processing power. Iterations.
I believe that with more iterations (processing power) we could probably start to fly planes in almost all situations without pilots (sorry pilots, nothing personal). This, in my opinion, still falls under the umbrella of "expert systems".
Do you think Alex the Parrot (who unfortunately recently passed away) was intelligent?
I've never heard of Alex until now, but Alex would be as intelligent as a parrot is. I believe they have some capacity for limited language, mostly through nuance and tone-- which is already more than even the biggest computer can do. I believe that animals have a form of intelligence beyond computing intelligence. But it's animal intelligence, not human intelligence.
?!! What, a retarded five year old? Oh, and this brings us around to the discussion of Koko the gorilla. Koko's researchers have been roundly criticized for cherry picking their data on Koko, throwing out results that were unfavorable- citing Koko being a pill-- or being uncooperative-- and including favorable results only. But regardless, animals are as intelligent as animals are. We're talking about a device which adds numbers by comparing 1's and 0's. Moving on...
BTW, I notice you didn't answer my question about whether you dispute that biological and non-biological intelligence will merge.
Sorry, not out of evasion, but omission.
I'm not sure how this will be defined.
Take me, I'm a biological intelligence (disputed at times, yes, but let's just accept this for the purpose of the discussion). I use google. My hand is touching my keyboard. Am I augmenting my intelligence with traditional computing-- using that computing power to call up large amounts of data and facts quickly, thus allowing me to more succinctly make my points? I can't answer that really. I have my doubts. Point being, you can throw data at people-- it's how they perceive, understand and process the data that defines intelligence. Sure, if you hooked up a 13 function calculator to my brain, I'd never suffer from math anxiety again. But is that an augmentation of intelligence? I guess, but the computer isn't the intelligence, the intelligence is the intelligence, the computer is a tool.
Look at it this way, is the mechanic a better mechanic because he has a wrench in his hand? He's a more effective mechanic, but he's still only as good a mechanic as he was five minutes ago. He could also sit on a chair at the back of the room, shouting orders to a dope-addled kid with a wrench in his hand. Giving man tools didn't make him more intelligent. Man was intelligent, so he created tools to make man more effective with his intelligence.
Was Alex intelligent?
Alex is intelligent according to a measurement of intelligence that is satisfied by an ability to "combine the different labels in his vocabulary to request, confuse, or categorize over 100 different things," Pepperberg said. "So he understood labels from materials, shapes, colors, things like that."
definition of intelligence on wikipedia:
The definition of intelligence has long been a matter of controversy.
At least two major "consensus" definitions of intelligence have been proposed. First, from Intelligence: Knowns and Unknowns, a report of a task force convened by the American Psychological Association in 1995:
Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen somewhat different definitions.[1]
A second definition of intelligence comes from "Mainstream Science on Intelligence", which was signed by 52 intelligence researchers in 1994:
a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings--"catching on", "making sense" of things, or "figuring out" what to do.[2]
http://en.wikipedia.org/wiki/Intelligence
Politic:
Exactly, so is a thermostat intelligent?
I say No.
Per the posts above, flying a plane with a computer program is basically a thermostat.
If nose dips below x aspect, pull up a little.
If tail goes beyond y aspect, yaw left or right.
If wing dips below or above x angle, bank left or right.
If turbulence detected (vibration x out of tolerance), send signal to nearest tower for optimum altitude; ascend or descend to determined altitude or until within vibration tolerance.
This isn't intelligence, it's a thermostat. Computers are programmable thermostats.
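If it helps to see how little is going on in that kind of logic, here is a toy Python sketch of the same condition-action style. Every threshold and action below is invented purely for illustration; no real autopilot is this crude:

# Toy "programmable thermostat" autopilot: a table of condition-action rules.
# All thresholds and actions are made up for illustration only.
def autopilot_step(state):
    commands = []
    if state["pitch"] < -5:           # nose dips below tolerance
        commands.append("pull up a little")
    if abs(state["yaw"]) > 3:         # tail drifts beyond tolerance
        commands.append("yaw back toward center")
    if abs(state["bank"]) > 10:       # wing dips too far
        commands.append("level the wings")
    if state["vibration"] > 0.8:      # "turbulence detected"
        commands.append("request a new altitude from the nearest tower")
    return commands

print(autopilot_step({"pitch": -7, "yaw": 1, "bank": 2, "vibration": 0.9}))
# ['pull up a little', 'request a new altitude from the nearest tower']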
Not wanting to sound like Ramona, but a thermostat is intelligent if the definition of intelligence is an object's ability to react to stimuli in a manner satisfying some task goal, i.e., measure current temperature and adjust processes in order to adjust temperature to that of a goal state (e.g., a beautiful 65-degree WI evening).
but not intelligent like a retarded 5 year old.
Is my ability to respond to your questions more than what could be achieved by an advanced conversationalist expert system? How would you test me to see if I am a human instead of a DARPA cyber-turing-bot seeking out AI skeptics?
If hungry, eat
If tired, sleep
If hot, remove hand
If pleasurable, smile
If pleasurable
And contested (turbulence detected)
Display poker face
send signal to nearest person for optimum attitude, ascend or descend to determined attitude or until within optimal comfort zone.
The Turing test doesn't test "understanding" language...it tests a computer faking like it's human. If one asked a computer during a Turing test, "What did you feel like on your first date?," and the computer responded, "I never went on a date. I'm a computer, you idiot!" (Has a bit of a Gregory House personality...)
...it would flunk the Turing test.
But regarding your "understanding" language...there are already computers that translate web pages between various languages. Computers are even beginning to be able to translate between spoken languages. But of course, that's just thermostatic intelligence, according to you, right?
So if a computer could translate on-the-fly between virtually all the spoken languages of the world...which is better than essentially 100 percent of the human population...but it wouldn't be "intelligent."
Right?
Mr. Bailey (sorry for the earlier informality):
I think the Joshua Blue Project is one step in the right direction by recognizing that it's real situational abilities that AI ought to be after, but I still think that a somatic underpinning is missing. Is this Joshua Blue going to have a body? Will it be nourished? I think this really basic (and obviously too simplistic for the Humanity Institute people) question is actually the important one. In this regard, it's Rodney Brooks who seems to have his head on square. Let's build a robot and see what happens. It may well be that we get something interesting, but whether you want to call it an AI or a simulation of human intelligence (if it's a really cool robot), at that point, would be a purely semantic dispute.
And let's not forget about evolution. There's a not totally discountable argument (from Ruth Millikan) that the evolutionary facts of our mental states are an essential part of their content. Since no AI is the product of natural selection (or, at least, not the kind that produced us), why think their mental states have the same contents as ours?
As for Novamente, that sounds like the old Encyclopedia project out of UT Austin. The way forward for AI is ecologically driven, where we just try to get something that can physically navigate a real environment (a la Brooks) and let that drive the computation. I think Brooks would be happy with a competent silicon cockroach, and I'd be impressed if he made one.
Paul,
To discuss any subject with you I need your definite answer to several questions:
1. Is the human brain (whole body) a finite system, which may be analyzed in general? (YES or NO)
2. Is it possible for an omnipotent system to exist? Omnipotent from any point of view, not omnipotent as a synonym for very powerful; e.g., it can create a stone which it cannot pick up. (YES or NO)
3. Is it possible to forecast the future forever? I.e., to have infinite memory and infinite intellectual (and computational) power. (YES or NO)
"by an advanced conversationalist expert system" I mean a CES with ADD and questionable social skills..
"computers are very poor at doing things we've done since the beginning of human existence: language, image recognition-- deriving meaning from imagery, understanding, emotional cognition"
Computers are great at language; they are programmed with various levels of languages that must communicate between an OS, various software apps, user input, and external computers in the network, all with a thermostat-like awareness of processing capacity. They are not good at human language and the hodge-podge of associations/connotations with which we relate our language to individual experiences/perceptions in the real world.
?besides how often have you tried to explain abstract terms like love with little success.
...technology is making big advances in image recognition, not necessarily via the same method as human sight...besides, I doubt you would suggest visual imagery is necessary for intelligence, unless you wanted to be bombarded by angry audiomails from visually-impaired bloggers. Emotional cognition? I don't understand myself, let alone other people; maybe you know a genius shrink you can put me in touch with.
Objective Art? I have nodded several times when a pretty art student explained the meaning of some "contextual impressionist art" only to be reminded of unique perspective and an ego's willingness to defer to the id.
satisfice...satisfice...satisfice
"Kurzweil talks about a computer program able to 'simulate' a human mind in 10 years; in what way? Will it go out on dates? Will it have emotional meltdowns? All the best work in cognitive science recently (Le Doux, D'Amasio, Ramachandran) has been on the importance of emotions to mentality and the importance of bodily states to emotions." Stephan Johnson
I agree that 10 years is unlikely to produce the technology required to comprehensively simulate the human condition with regard to emotion and feeling. But is one human mind ever able to simulate another to any certain degree? And what about cognitive scientists who stimulate a portion of the brain with electrodes while the subject reports various moods and memory recalls? What are dreams, flashbacks, and the phantom limb phenomenon of amputees? Is all past experience relevant to every new conscious experience? Besides, it would be a troublesome hurdle if AI were expected to simulate a 13-year-old female in order to gain respectability.
Mark Bahner
Are you arguing for a Chinese Room perspective or against it? My novice understanding of current translation software would cause me to consider it little more than a set of word exchanges, with some syntax rules. Ultimately it is unable to convey meaning across complicated sentences and subtle uses of semantic meaning with any high degree of accuracy. What about discussion context, semantic drift, and measurements of the "fusion of horizons"?
probably all quibbles over differences between strong and weak AI again.
testin, I don't know about Paul, but I am not altogether clear on what you're asking in your questions.
Do you dispute that, somewhere between 2020 and 2050 (i.e., a rough translation of "a few decades" relative to 2001), computer intelligence will exceed human intelligence?
What does "intelligence" mean? I still haven't seen a robot that I would consider as spatially intelligent as a dog or a cat. Frankly, most of the robots I have seen are much worse at moving around in a space than even a bug. On the other hand, I think that you can use a computer to do a lot more calculations per second than you could do with a bug's brain. So, while I'm sure that computers have more 'intelligence' in the sense of raw potential, without the software to go with the hardware, it all goes to waste.
In any event, we're rapidly approaching the point of diminishing returns in computer hardware. Take this example. Go into Google and type 299792458 m/s / 1 cm. Google will tell you that it's equal to about 30 GHz. 299792458 m/s is the speed of light, and 1 cm is a reasonable chip size. It's not possible for a signal to get from one side of a chip to the other in less than the time of one 30 GHz clock cycle. We're already within an order of magnitude of how fast CPUs will ever be able to cycle without breaking the light barrier.
On the size front, we do still have a fair bit of room for improvement left. The next Intel Core 2 will be a 45 nm chip, and silicon atoms are roughly 0.11 nanometers across. So at best we could shrink linear feature sizes by a factor of a few hundred.
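As a rough sanity check on those two numbers, here is a back-of-the-envelope sketch using the figures quoted above (a 1 cm die, a 45 nm process, and roughly 0.11 nm for a silicon atom); the inputs are the commenter's assumptions, not measured limits:

```python
# Back-of-the-envelope check of the two hardware limits mentioned above.
# Inputs are the figures quoted in the comment, taken as rough assumptions.

SPEED_OF_LIGHT = 299_792_458      # m/s
DIE_SIZE = 0.01                   # m (a 1 cm chip)
FEATURE_SIZE_NOW = 45e-9          # m (45 nm process)
SILICON_ATOM = 0.11e-9            # m (rough size of a silicon atom)

# Highest clock rate at which a signal could cross the whole die in one cycle.
max_clock_hz = SPEED_OF_LIGHT / DIE_SIZE
print(f"Light-speed limit for a 1 cm die: about {max_clock_hz / 1e9:.0f} GHz")

# Remaining room to shrink linear feature size before reaching atomic scale.
shrink_factor = FEATURE_SIZE_NOW / SILICON_ATOM
print(f"Linear shrink still available: roughly {shrink_factor:.0f}x")
```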
Nevertheless, that kind of improvement is just not fundamentally enough to simulate the world at anything like the level of detail needed to simulate the brain by brute force. Look at the PlayStation 3 or Xbox 360. Those things are very sophisticated, and use a lot of shortcuts to make a visually realistic world without having to do all the math of simulating the real world in all its complexity. Yet the resolution of those simulations is still coarser than atom-scale simulation by far more than a factor of 100. Therefore, we aren't going to be able to do an AI the "easy way" by simply simulating a physical brain wholesale.
Instead, we're going to have to figure out which are the important parts of the brain to even bother simulating. In other words, this is a software problem, not a hardware problem, because even if we get computers down to the atomic level, we won't be able to just simulate things naively.
If you know anything about software development, you know that it's hard and that throwing man-hours at a problem won't solve it, because usually the hardest part of making software is just determining what kind of software you need to make anyway.
Anyhow, the whole singularity thing just strikes me as ridiculous. It's like people in the 70s saying, "Well, we went from no space travel to a moon landing in the decade of the 60s, so we should get starships sometime in the next 30 years, right?" Except that by the end of the 60s we had already hit the fundamental limitation of all known chemical rocket fuels, and until we get a new rocket fuel, we'll never make more than incremental improvements in space travel.
If you believe computers can do a decent job translating now, you are clearly monolingual. Babelfish is shite, only slightly better than using a foreign language dictionary yourself. (Actually, in many ways it's worse.)
In the event that computers did get good at translating though, it still wouldn't mean much. There's a world of difference between using grammar patterns and a lookup table to translate something and creating something truly original and appropriate spontaneously. Doing one doesn't help you do the other.
Not wanting to sound like Ramona, but a thermostat is intelligent if the definition of intelligence is an object's ability to react to stimuli in a manner satisfying some task goal.
Politic:
Then AI was invented several hundred years ago, to be sure.
It's not intelligence. The air reacts to stimuli: heat, cold, movement. A thermostat can be as simple as a piece of metal that contracts as the temperature goes down and expands as it goes up.
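To make the contrast concrete, here is a minimal sketch of what that kind of "thermostatic intelligence" amounts to in code; the setpoint and hysteresis values are invented for illustration:

```python
# A minimal sketch of "thermostatic intelligence": one hard-coded rule,
# no learning, no model of the world. Setpoint and hysteresis are made up.

def thermostat(temperature_c, setpoint_c=20.0, hysteresis_c=0.5):
    """Return a heating command based on a single threshold rule."""
    if temperature_c < setpoint_c - hysteresis_c:
        return "heat_on"
    if temperature_c > setpoint_c + hysteresis_c:
        return "heat_off"
    return "no_change"

print(thermostat(18.0))  # heat_on
print(thermostat(22.0))  # heat_off
```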
Is my ability to respond to your questions more than what could be achieved by an advanced conversationalist expert system? How would you test me to see if I am a human instead of a DARPA cyber-turing-bot seeking out AI skeptics?
Well, yeah. On the level, though, I was beginning to wonder if Mark Bahner was a chatbot, because he used the classic feature of answering questions with more questions, one of the central tactics of the original Eliza program. (My first disappointment in a very long line with AI chatbots).
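For anyone who hasn't poked at one, that tactic costs almost nothing to implement. Here is a toy sketch in the spirit of the original ELIZA, with made-up patterns (the real program used a much larger script of similar rules):

```python
# Toy sketch of the ELIZA tactic described above: match a keyword and
# reflect the user's words back as another question. Patterns are invented
# for illustration only.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.I), "We were discussing you, not me."),
]

def eliza_reply(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default: yet another prompt, never an answer

print(eliza_reply("I feel like computers will never be intelligent"))
```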
Seriously, though. We need to get some perspective back here. What many people forget is that there were technologies invented well over a hundred years ago which were by all accounts every bit as revolutionary as the digital computer. But we don't look at those technologies and call them artificial intelligence, or even intelligent. If you guys want to decide that pressing the '=' button on your calculator and receiving the correct answer to previously entered formulae counts as intelligence, then I guess we're done, Kurzweil is right, debate finished. But I have a feeling that most of us can agree that the calculator isn't intelligent; your computer on your desk is exactly the same as your calculator, only with way more memory and speed.
Mark Bahner:
The Turing test doesn't test "understanding" language...it tests a computer faking like it's human. If one asked a computer during a Turing test, "What did you feel like on your first date?," and the computer responded, "I never went on a date. I'm a computer, you idiot!" (Has a bit of a Gregory House personality...)
...it would flunk the Turing test.
Not necessarily. If you actually had a computer that could "fake" a human conversation effectively enough to trick a human investigator, it would be elementary to program the computer to not reveal it was a computer.
To wit: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, then the machine is said to pass the test. It is assumed that both the human and the machine try to appear human.
The Turing test is brilliant in its simplicity. There are also no real provisions that either the computer or the person has to tell the truth about being human or being a computer. The human counterpart may also report that he's a computer. That's the point. Chatbots are fantastically easy to boggle because they want you to play ball. They actually depend on a human interactor who will stay on message and on target.
Computers are even beginning to be able to translate between spoken languages. But of course, that's just thermostatic intelligence, according to you, right?
Yep. It's a calculator comparing 1s and 0s. It has no cognition of what the language means. What's worse is that computer programs, fifty years after it was declared this would be "easy to do," are still struggling at even this seemingly simple task. We use some of these programs in our company. They're mediocre to horrible. Their primary use is to get the gist of the meaning. Ask Google to translate Spanish to English:
Text: Mi casa es su casa.
Translation: my house is its house
Hoo boy, we got a long way to go.
Another example. There's a robotic lawnmower that "learns" where the borders of your lawn are (with mixed results, so say the reviews) based on a wire you run around your lawn. This is thermostatic "intelligence" at its ultimate. Making a computer "remember" where to go is fantastically simple. I can't go out to the lawnmower and say, "Yeah, great job, but it'd be great if you could clean the pool, too."
So if a computer could translate on-the-fly between virtually all the spoken languages of the world...which is better than essentially 100 percent of the human population...but it wouldn't be "intelligent."
Nope. Why? Because it's simply rule based. Adding more and more rules and getting something nifty does not an artificial intelligence make. This is precisely what burned AI researchers in the sixties. They figured if they could add enough rules, hey presto, out would pop a singularity. They were so very wrong. Because what they were immediately frustrated with was that the deeper you looked at the so-called rules that humans use to "reason", the deeper and more complex... and enigmatic they became.
One researcher lamented (I'm paraphrasing):
We tried to get a computer to understand a simple child's story about a kid who received two of the same gift and was therefore disappointed. We became frustrated with all the exceptions we had never thought of before the project started. Even if we could tell the computer that getting two of something was "bad," we realized that getting two of something wasn't always bad, as in getting two chocolate chip cookies, or two dollar bills.

It's the contextual cognitive response that has never, ever been duplicated or even approached with modern computing. The problem here, Mark, is that you appear to be firmly in the camp of rules-based AI.
I was doing some historical research on AI a few years ago, and I discovered that one of the top researchers in AI had basically given up. He had been replaced by the "rules-based" AI researchers who, after about a decade of work, came up with a vacuum cleaner that could automatically clean your floors. It's now known as the Roomba. It's born of a camp that believes AI can be produced with a top-down approach.
Some researchers have basically abandoned the idea that computers can become intelligent. The rest fall back on the idea that we just need better programming: if we just write better software, it'll all come together. I'm with the first camp.
Remember, one of the basic elements of intelligence is the ability for the entity to learn, without reprogramming or adding new hardware. My coffee pot "knows" to turn itself off when a certain amount of idle time has been reached. Call Ray Kurzweil, it's here!!!
computers are great at language, they are programmed with various levels of languages that must communicate between an OS, various software apps, a user input, external computers in the network, all with a thermostat like awareness of processing capacity. They are not good at human language and the hodge-podge of associations/connotations with which we relate our language to individual experiences/perceptions in the real world.
I'm not sure where you're going with this. Computers are lousy at language, and have been frustrating us software developers for years. Did I mention my background is software development? Better get that out there early.
I used to do some teaching of computer languages to novice programmers, and the first thing you pound into their head is how absolutely horrible computers are at second-guessing what you really "meant".
...technology is making big advances in image recognition, not necessarily via the same method as human sight...besides, I doubt you would suggest visual imagery is necessary for intelligence, unless you wanted to be bombarded by angry audiomails from visually-impaired bloggers.
Of course I wouldn't. Everyone who's spent more than five minutes on this subject knows that blind people are intelligent. So are deaf people, and people who are both deaf and blind. A conundrum for many AI researchers who are convinced that once their calculators can see and hear, they'll be set.
Emotional cognition? I don't understand myself, let alone other people; maybe you know a genius shrink you can put me in touch with.
I'll send you my old four function calculator and you'll be set. Actually, you'll need more intelligence than that. I'll send you two. 😉
Carl:
If you believe computers can do a decent job translating now, you are clearly monolingual. Babelfish is shite, only slightly better than using a foreign language dictionary yourself. (Actually, in many ways it's worse.)
It is worse. Way worse. I can get the accurate translation of a word with the dictionary. Put two words together into that abomination called Babelfish and you've just told your patient to use a banana for an enema.
"not wanting to sound like Ramona, but a thermostat is intelligent if the definition of intelligence is an objects ability to react to stimuli in a manner satisfying some task goal,"
I agree that was fairly useless, except that it underscores the importance of agreeing on a definition.
"But I have a feeling that most of us can agree that the calculator isn't intelligent-- your computer on your desk- exactly the same as your calculator, but with way more memory and faster."
I agree the calculator is not intelligent by any meaningful definition, but why do you consider the calculator and pc to be the same, isn't the pc also able to store/manipulate/compare new information, perform operations with various inputs, monitor itself, "learn" by neural net type functions, and even make adjustments to its own software?
"I can't go out to the lawnmower and say "Yeah, great job, but it's be great if you could clean the pool, too"."
You can't ask a person to run 100 mph or shoot flame from their nose either, but I think your point is once again that the lawnmower is not a sophisticated piece of AI. I agree again, but that's not a refutation of the possibility of a responsive agent. The fact that many animals can't fly doesn't mean birds don't exist.
"Remember, one of the basic elements of intelligence is the ability for the entity to learn, without reprogramming or adding new hardware.
When the brain makes new connections, prunes, and decays (1s and 0s, or activation ratios), is that analogous to reprogramming or hardware changes? Or to software that changes rules and updates data sets? Besides, where did the second assertion come from?
"A conundrum for many AI researchers who are convinced that once their calculators can see and hear, they'll be set."
Not really; it is a belief that some type of input, especially a human one, is necessary for monitoring the environment and the way it reacts to the agent's actions...necessary like wheels on a bus, but wheels are not sufficient for being a bus.
"I used to do some teaching of computer languages to novice programmers, and the first thing you pound into their head is how absolutely horrible computers are at second-guessing what you really "meant"."
If you put the decimal in the wrong place and expect the computer to understand that your intention was to have the value of the yen updated every 10 seconds according to market conditions...then yeah, the current software is not great at reading minds, but improvements are being made in bug detection via AI tools.
How good are humans at understanding one another when detailed accounts are compared? It seems as though human communication is dependent on ambiguous details being summarized in the aggregate. Is this not the advantage of fuzzy logic...it circumvents bottlenecks due to mismatched identities. An AI needs context and empathy.
I am not a professional software developer and would welcome a better example for clarification, without the head pounding please.
"If you know anything about software development, you know that it's hard and that throwing man hours at problem won't solve it, because usually the hardest part of making software is just determining what kind of software you need to make anyway." Carl
Out of curiosity...what would happen if a software engineer and a philosopher were codified into an expert system and devoted to self-analysis and optimization? In essence, a software agent that knows what type of questions to ask and how to program itself...before being primed for the external world.
I'll sleep on that.
A intellegnce is illusion.Human spirit is so strong,thousands new idea may arise in modern science. Donot afraid, man is greater than this A intellence.
Futurists have habit to predicate bombardment and fear to common man, that one of the trick. From ancient time soothsayers, occultists are using same trick. so donot give too much importance to this soothsayer.and live life joyfully.creativily
Before the machine can fix itself, you must create the program that fixes itself. This is a mind-bendingly hard problem. Like most software development issues, it begins with a vague desire ("I should computerize inventory and such", "We should make a computer that self-introspects") before getting bogged down when the details need to be specified ("OK, so the item column of the DB will have the following properties...", "OK, the mental model object needs to identify the creativity quotient of each item in relation to the practicality index...").
I'm on my way to pointing out that Paul (and potentially others...I'm too lazy to keep track of everyone's nuanced positions) seems to consider computers "unintelligent," simply by definition.
Paul, and everyone else commenting, seems to agree with me that Alex the Parrot (R.I.P.) possessed "intelligence." But Paul says that computers possess no intelligence.
Yet, if I gave Alex the Parrot a set of regulations on air pollution from coke ovens (used for steel manufacturing) that was in Chinese, and Alex the Parrot did even a somewhat reasonable job in translating those regulations into English, I think everyone here would say, "That is one #&*^ smart parrot!"
Well, here are regulations (in Chinese) for coke oven emissions in China:
Coke oven regulations, in Chinese
Can anyone commenting read them in the original Chinese? Now show the page to a parrot, and see if he or she can read them to you in English. Everyone commenting seems to agree that humans are intelligent, and that Alex the Parrot was intelligent. But when it comes to the problem of translating these regulations into English, how many humans or parrots can do so?
And here is the Google translation into English:
Google translation into English
In support of my analysis of probable world per-capita GDP growth in the 21st century, I made calculations of the "human brain equivalents" (HBEs) added every year by personal computers. This calculation was the straightforward product of the number of personal computers produced worldwide times the number of instructions per second of each personal computer (where the instructions-per-second calculations and projections were Ray Kurzweil's values per $1000 of computer).
Why economic growth in the 21st century will be spectacular
By my calculations, only one(!) HBE was added to the world population in 1993. But by my projections (based on Ray Kurzweil's projections) approximately 1 billion HBEs will be added in 2025, one *trillion* HBEs will be added in 2033, and one *quadrillion* HBEs will be added in 2040.
According to Paul (please correct me if I'm wrong), zero human brain equivalents were added in 1993, zero will be added in 2025, zero will be added in 2033, and zero will be added in 2040. Because computers aren't intelligent. But when it comes to predicting the future--e.g., world per-capita GDP growth in the 21st century--I'm willing to bet that my calculation will be more useful.
Summary: When it comes to predicting the future, rather than defining computers as unintelligent (possessing zero intelligence), a more useful way of assessing computers relative to human brains would be to simply compare the number of instructions per second processed. If this method of assessing the number of "human brain equivalents" ("HBEs") added to the human population every year is used, the most logical conclusion is that a Singularity (a time of such rapid technological change that the future can't be predicted) will occur before 2050.
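For readers who want to check the shape of this argument, the HBE arithmetic described above can be reconstructed roughly as below. The brain estimate and the example inputs are illustrative assumptions, not the figures from the original analysis:

```python
# Rough reconstruction of the "human brain equivalents" (HBE) arithmetic
# described above: HBEs added in a year = (PCs produced that year)
# x (instructions per second per PC) / (instructions per second per brain).
# All numbers below are illustrative assumptions, not the original figures.

BRAIN_IPS = 1e16  # assumed instructions/second for one human brain

def hbe_added(pcs_produced, ips_per_pc):
    """Human-brain equivalents added by one year's production of PCs."""
    return pcs_produced * ips_per_pc / BRAIN_IPS

# Purely hypothetical year: 200 million PCs, each doing 1e11 instructions/s.
print(f"{hbe_added(2e8, 1e11):,.0f} HBEs added")  # -> 2,000 HBEs added
```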
You seem to be implying that the Turing Test could be "played" with the human pretending to be a computer. But that's not a part of any Turing Test of which I'm aware. The Turing Test is predicated only on both computer and human claiming to be human.
If the Turing Test allowed a human to pretend he or she was a computer, then he or she could simply answer every single question with, "Syntax error."
I thought this was an excellent article in Scientific American's "Exploring Intelligence" special edition:
I agree the calculator is not intelligent by any meaningful definition, but why do you consider the calculator and pc to be the same, isn't the pc also able to store/manipulate/compare new information, perform operations with various inputs, monitor itself, "learn" by neural net type functions, and even make adjustments to its own software?
Uhm, yeah, if you start really loosening the definition of "learn".
The point being that we're talking about intelligence here. "Intelligence." I used to write "intelligent" programs. But because they were based upon rules set forth by... me, they were no more intelligent than I was. I.e., I had to PRE-THINK all the rules and program them in. In the end, the program was no smarter (at its very myopic task) than I was. Maybe my not-so-literary brain doesn't work well enough in this setting to explain my point. If I have to PRE-program a device with all the possible combinations, that's not intelligence. Even if you program in a kind of dynamic response and "learning," it's still not intelligence; it's a device which will behave predictably depending on certain meta-criteria.
You can't ask a person to run 100 mph or shoot flame from their nose either, but I think your point is once again that the lawnmower is not a sophisticated piece of AI. I agree again, but that's not a refutation of the possibility of a responsive agent. The fact that many animals can't fly doesn't mean birds don't exist.
I think you've finally accidentally made my point. Yes you can. I can ask a person to run 100mph, or shoot flame from their nose, and if they're intelligent, they'll figure out a way to do it. Why? Because the human being doesn't operate on a preprogrammed set of rules. The intelligent human will make it happen-- they will create something from nothing. When people were on foot, they could only go as fast as they could run. But because of human intelligence, we were able to domesticate animals, invent carriages, motor vehicles, planes, rockets.
...then yeah, the current software is not great at reading minds, but improvements are being made in bug detection via AI tools.
Based on rules that the developers pre-think into the system. Think about it: in the sixties, one of the first tasks they believed could be accomplished by the digital computer was translating languages. How hard can that be, fer chrissakes? Plug a definition for each word into a huge dictionary, then cross-reference it at translation time, adding a few hundred rules for grammatical nuances? It's fifty years later, and Google, with all of its top PhD engineers, can't effing translate "mi casa es su casa." It's because language is incredibly fluid, dynamic, and full of nuanced meaning that seems fantastically clear when we hear it.
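That "huge dictionary plus a few hundred rules" approach can be sketched in a few lines, which is roughly why it fails; the tiny glossary below is invented just to reproduce the mi casa example:

```python
# A word-for-word "translator" of the kind described above: a lookup table
# and no real grammar. The glossary is invented for illustration.
GLOSSARY = {
    "mi": "my",
    "casa": "house",
    "es": "is",
    "su": "its",  # could equally be "your", "his", "her", "their"; the table can't know
}

def naive_translate(sentence):
    words = sentence.lower().rstrip(".").split()
    return " ".join(GLOSSARY.get(word, word) for word in words)

print(naive_translate("Mi casa es su casa."))  # -> my house is its house
```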
Is this not the advantage of fuzzy logic...it circumvents bottlenecks due to mismatched identities. An AI needs context and empathy.
I'm not sure what you're getting at here. An AI needs to be able to interpret and determine context. Computers are notoriously poor at linking seemingly out-of-context things. The problem here, politic, is that we don't understand how or why the brain makes the connections it does with abstractions and disparate pieces of information. See my earlier post: I'm not saying an AI will never be developed; I contend that it will never be developed with a binary computer at its core. It will probably be something organic, and when it does work, we won't fully understand why.
Out of curiosity...what would happen if a software engineer and a philosopher were codified into an expert system and devoted to self-analysis and optimization? In essence, a software agent that knows what type of questions to ask and how to program itself...before being primed for the external world.
You'd have a real AI, and I'd finally shut up.
A intellegnce is illusion.Human spirit is so strong,thousands new idea may arise in modern science. Donot afraid, man is greater than this A intellence. Futurists have habit to predicate bombardment and fear to common man, that one of the trick.
Ramesh is using one of those computer translating programs. Case...in...point.
Paul, and everyone else commenting, seems to agree with me that Alex the Parrot (R.I.P.) possessed "intelligence." But Paul says that computers possess no intelligence.
Yet, if I gave Alex the Parrot a set of regulations on air pollution from coke ovens (used for steel manufacturing) that was in Chinese, and Alex the Parrot did even a somewhat reasonable job in translating those regulations into English, I think everyone here would say, "That is one #&*^ smart parrot!"
Mark, you're losing me here. You're aggressively straddling two different things, then standing back, pointing, and saying, "See what I mean?"
Alex the Parrot is intelligent. Alex the Parrot is not a binary computer. Alex's intelligence is real. The binary calculator with the hard drive and memory on your desk is not. Even if Alex could do those things you suggest, hell, even if Alex could bridge the gap between the theory of gravity for large bodies and quantum physics, the binary computer on your desk wouldn't get even a little bit smarter.
Can anyone commenting read them in the original Chinese? Now show the page to a parrot, and see if he or she can read them to you in English. Everyone commenting seems to agree that humans are intelligent, and that Alex the Parrot was intelligent. But when it comes to the problem of translating these regulations into English, how many humans or parrots can do so?
Two things here, Mark. I can't see the pages you translated, the links didn't come up right for me. So... I can't comment. As for the continuing discussion of all the amazing things that would be really amazing if Alex could do it... I'm still not sure where you're going with that. I guess I don't understand the point you're trying to make.
According to Paul (please correct me if I'm wrong), zero human brain equivalents were added in 1993, zero will be added in 2025, zero will be added in 2033, and zero will be added in 2040. Because computers aren't intelligent.
I think I understand your point here, and the answer would be "correct." By adding more processors (binary computer processors) to the world, you're not adding any more human intelligences, any more than you add to human intelligence by building more torque wrenches or putting in more thermostats.
You may be conflating "improving the human condition" with creating intelligences. Humans use computers as tools, just as a mechanic uses a torque wrench. The mechanic isn't any smarter; he's more effective. Kurzweil's calculation is flawed in its premises. The HBE (human brain "equivalent") is a massive but cracked foundation on which he builds everything else. It's wrong... therefore everything that follows is wrong, too.
When it comes to predicting the future, rather than defining computers as unintelligent (possessing zero intelligence), a more useful way of assessing computers relative to human brains would be to simply compare the number of instructions per second processed.
Instructions of what? I've already pointed out that the modern computer is way... waaaay faster than the human brain. Processing more instructions does not an intelligence make; it's how those instructions are processed. Read some of my links above. I really think you'll find them informative.
Suppose in 1905 I had told you, "By 1945, we will have had two World Wars. The second one will end when an airplane flies over 1500 miles from Tinian Island to Nagasaki, Japan. The airplane will carry a 10,000 pound plutonium bomb, and that single bomb will have an explosive power of approximately 21 THOUSAND tons of TNT. More than 70,000 people will be killed instantly by that one bomb."
I'll bet you would have said, "That whole thing just strikes me as ridiculous. Airplanes flying 1500+ miles, carrying over 5 tons? A single bomb having the explosive power of 21 THOUSAND tons of TNT? And just what the &*%# is plutonium, anyway?"
In fact, you probably would have thought I was a lunatic.
Many philosophers and humanist thinkers are convinced that the quest for artificial intelligence (AI) has turned out to be a failure. Eminent critics have argued that a truly intelligent machine cannot be constructed and have even offered mathematical proofs of its impossibility. And yet the field of artificial intelligence is flourishing. "Smart" machinery is part of the information-processing fabric of society, and thinking of the brain as a "biological computer" has become the standard view in much of psychology and neuroscience.
Exactly. I said before(see my posts above) that AI research fell into two camps. The people like me who threw up their hands and said "eff it, I'm going to play World of Warcraft", and the other camp which merely said, "look we can make some nifty [tools] if we simply add more rules". After ten years of research, they had a robotic lawn mower and room vacuum. Hooray! This thread discussion practically mirrors that. I mean, read the title to the article "Rethinking the goals of [AI]". Exactly. We didn't create the AI that everyone understood we would in the sixties, so we'll just move the goalposts back. Waaaaay back.
Oh, and this statement is a false parallel: "...with an earlier endeavor that also sought an ambitious goal and for centuries was attacked as a symbol of humankind's excessive hubris: artificial flight."
The two aren't even close. I'm a technological optimist. I never would have been in the camp refusing to believe that artificial flight wasn't possible. I believe that faster than light travel is possible. Teleporters-- you name it. I even believe an AI is possible, but not with binary computing. Any more than those who INITIALLY believed in artificial flight thought it would be done by imitating a bird. Get it?
People were convinced that for man to fly, you'd have to create a machine that flapped like a bird. Many lives, hours and bits of wood were wasted making a machine which "flapped". Until one day someone said "we're going about this all wrong." AI will be the same way. Binary computers will be abandoned, something entirely different will be created, and then Kurzweil will still claim that he was "right" and will live in a bigger house than I do.
"...bomb will have an explosive power of approximately 21 THOUSAND tons of TNT. More than 70,000 people will be killed instantly by that one bomb."
I'll bet you would have said, "That whole thing just strikes me as ridiculous. Airplanes flying 1500+ miles, carrying over 5 tons? A single bomb having the explosive power of 21 THOUSAND tons of TNT? And just what the &*%# is plutonium, anyway?"
In fact, you probably would have thought I was a lunatic.
Not at all. Plus, we're getting into a series of false dichotomies. You're suggesting that because I don't think that the calculator on my desk is "intelligent", that I don't believe in technological advancement. I do. See my comment about machines flapping like birds. Technology doesn't expand linearly as some think. It goes on a generally evolutionary pathway, and periodically, a revolutionary event takes place. Computers are now on an evolutionary pathway. Until we create something that isn't a binary calculator requiring tedious and meticulous programming (rule setting) to make work, we're wasting our time.
We need to get back to our central theme: will an AI spontaneously appear by virtue of the fact that we're getting more and more computing power? No. It will not. Any more than artificial flight spontaneously appeared by creating more and more flapping pieces of plywood and burlap.
Ho, ho, ho! Well, at least you score brownie points for the funniest statement! In ***1905***, you wouldn't have said I was crazy if I talked about an *airplane* carrying an ATOMIC BOMB 1500+ miles, with that one bomb destroying a whole city!
You need to review history a little more carefully. Einstein did not even publish his Special Theory of Relativity until *September* of 1905!
I wasn't even responding to you, Paul. I was responding to Carl.
As I told you before, that's not even close to what Ray Kurzweil has been writing or saying. Have you even read any of Ray Kurzweil's books on computer intelligence or the Singularity (e.g. The Age of Spiritual Machines, or The Singularity is Near)?
Paul,
The problem with AI isn't binary computers; it is really a problem of quantifying what we consider intelligence, which is a very hard problem. As someone who has studied Natural Language Processing (which is computer processing of human language), I can tell you what has happened: the experts have, like the experimenter above with the two presents, realized that human language is tied to intelligence, so that parsing it like a human will require a human intelligence. As an example, if you have to interact with someone with a very low IQ, you can't talk like a university professor and expect to be understood.
I believe what Mark Bahner is getting at with the HBE stuff is that as we get better and better technology, we can spend more time doing what we as humans can do that our machinery cannot. And what that will inevitably be is finding ways to make our machinery do what it can't do yet. The apex of that is, of course, thinking like a human.
Paul, you put down the Roomba, but the simple fact is that they have managed to make commercially available something that can take an unknown space, cover it all, and do the vacuuming sufficiently well for many workloads. That is a huge first step, as the commercial world is fairly brutal. With a few other companies in that market, you could be done with all types of surface cleaning, forever. That wouldn't be a bad optimization; all those people who have the potential to do more than clean could then be trained to do things robots can't.
So in closing, once we can actually say what intelligence is, we can program it. We haven't gotten that done yet. The best we have right now is more or less self-modifying pattern recognition. (Also note that in that department, psychology as a hard science, with MRIs and such, is still young as well.)
It's even more dramatic than that. Per Ray Kurzweil's projections of the number of calculations per second for $1000 of computer, and per reasonable estimates of the number of personal computers into the future, the computing capacity added in the year 2033 will be on the order of 1 TRILLION human brains.
Now, someone could say, "Well, but the software will still be primitive."
But in 2035, when a $1000 personal computer can do 1 quintillion calculations per second (or approximately 50 times as many calculations per second as a human brain), even with primitive software, the computer will be able to do essentially anything a human brain can do, plus hundreds of things more and better (e.g., protein folding calculations in seconds, translating every human language on earth in real time, fusion energy calculations, etc.).
That's why a Singularity can be expected before 2050.
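The "approximately 50 times" figure is just a ratio of two assumed rates, so it is easy to check; the brain figure of roughly 2x10^16 calculations per second is the Kurzweil-style estimate implied above, not a measurement:

```python
# Sanity check on the "approximately 50 times" figure quoted above, using
# the rates implied there (both are assumptions/projections, not measurements).
computer_cps = 1e18  # one quintillion calculations per second ($1000 PC, ~2035)
brain_cps = 2e16     # assumed calculations per second for a human brain

print(computer_cps / brain_cps)  # -> 50.0
```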
It is interesting, disturbing, and suggestive that recent findings suggest people become "stupider" as their access to and reliance on electronic intelligence boosters/substitutes increases. E.g.: many students now cannot do more than simple arithmetic; users of organizer apps on phones & PDAs don't know family birthdays or phone numbers, some not even their own numbers; all sorts of basic memory skills are atrophying through disuse. It doesn't quite seem to be true that these losses are compensated for by new skills and capacities, though that may evolve. But it's pretty hard to imagine balancing gains for losses in ability to think and remember.
Paul:
It is interesting to note that flapping flight is too complex for simple mechanical technologies, but not for more advanced feedback-driven systems. In particular, the very small fliers, for whom air functions something like a thin liquid, can be emulated. The discovery that a swimming motion, utilizing vortices and drag manipulation, was used by bio-fliers was key. Matching that with larger systems depends on many advances, not least an adequate power-to-weight ratio.
Check out some of the work being done with bat flight, which turns out to be more flexible and "advanced" than bird flight, perhaps because it developed from the later mammal lineage.
The analogy to computers may be the neural net model, which is currently hard to use because the elements, the cells, are fiendishly complex and competent individually, far beyond current duplication. But that's not necessarily a permanent condition. Even with stupid hardware, neural nets learn and exhibit strange and powerful capacities and characteristics. As an example, patterns learned are enhanced after a period of "sleep and dreaming": essentially undirected self-stimulation and activity.
There are more things ...
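As a concrete illustration of that point about learning, even a single artificial neuron built from trivially simple arithmetic can learn a pattern from examples; a minimal perceptron sketch (nothing remotely like the biologically rich cells described above):

```python
# Minimal perceptron sketch: one "stupid" artificial neuron learning
# logical AND from examples. This is only an illustration of learning from
# data; it is nowhere near the complexity of a real biological neuron.

def train_perceptron(samples, epochs=10, lr=1):
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_SAMPLES)
for (x1, x2), _ in AND_SAMPLES:
    print(x1, x2, "->", 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0)
```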
My earlier comment regarding the ability of the conference attendees to publish was definitely incorrect in regard to their previous work. However, I'm not certain how I would complete a blind review of some of the talks from this conference if submitted to a more traditional venue.
It has been an interesting read through the literature. I have no answer as to the feasibility of these visions, but I do remain somewhat in the negative camp. Hopefully, some good discussion on the ethics of these technologies will emerge from this conference.
Mark,
The reason Paul is skeptical is (assuming the proper training) that adding speed alone won't significantly change things. The reason is an information/computer-science theory of processing. The quick overview is that different processes take different amounts of work, based on how much information you are working with. A simple example is sorting: imagine taking a bunch of names and creating a phone book out of them. Doing this effectively can achieve a processing time that grows only a little faster than linearly, whereas doing it badly makes the work grow quadratically or worse.
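The phone-book example comes down to comparing growth rates rather than raw speed; here is a small sketch of how the work scales under an efficient sort versus a naive one (the comparison counts are idealized, not benchmarks):

```python
# Sketch of the phone-book point above: how the work grows with input size
# depends on the algorithm, not just on how fast the processor is.
import math

def comparisons_good_sort(n):
    """Idealized comparison count for an efficient sort (roughly n log n)."""
    return n * math.log2(n)

def comparisons_naive_sort(n):
    """Idealized comparison count for a naive quadratic sort (roughly n^2)."""
    return n * n

for n in (1_000, 1_000_000):
    print(f"n={n:>9,}: good ~{comparisons_good_sort(n):.2e}, "
          f"naive ~{comparisons_naive_sort(n):.2e}")
# Going from a thousand names to a million multiplies the good sort's work
# by about 2,000, but the naive sort's work by a factor of 1,000,000.
```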
Thus, to offset not putting some human ingenuity into it, you have to have exponential growth. The problem is that for the future, we need exponential speed and exponential data. For instance, to be able to process video similar to how humans do, we are working on algorithms which can take a picture and figure out what is there. To be able to do video, we would need to handle that job in less than real time and have a way to tie the individual frames of video together.
Another thing to note is that we aren't sure exactly how the human brain works, and processing speed alone is not the only measure that matters when doing work. You can have algorithms which take a small amount of data and work it hard, but most problems we are concerned with as humans involve taking large amounts of data and compressing them into a smaller result set. That means raw speed alone isn't what matters; it's a matter of pushing data through the system.
Ancillary to this, computers are starting to distribute work: multicore now, and in the future everyone will have some sort of computing grid, so that to make your computer faster you'll just add another processor. The difficulty in that is shuttling data between those processors. Our brain is very good at shuttling data.
But I'm optimistic. Optimistic that we can effectively take advantage of this distributed processing, and that as processing power ramps up, we as humans can come up with better *modes* of processing that will take untenable processes into the realm of the possible.
It's worth noting explicitly that I (along with Ben G. and Sam A. and Peter V. -- not to mention Alan Turing) feel that the key is to build a system that can learn and "bring it up". This has a number of implications, which include the need to teach it boy scout virtues, but also the reasonable expectation that you won't see "advances" in specific task performance while research shifts from narrow pre-programmed skills to general learning ability.
Josh
I think one significant reason Paul is skeptical is that he has simply never read (at least in detail) what Ray Kurzweil has written.
Paul has implied that Ray Kurzweil ignores developments in computer software, and that Kurzweil only includes developments in computer hardware. This is simply not correct.
The calculation of "Human Brain Equivalents" (HBEs) added each year to the human population is *my* calculation. It includes only hardware (and thus probably overestimates the number of HBEs added). I included only hardware in the calculation so that it would be easy for anyone to check the calculation/projection several years into the future. All one would need to to is to determine the MIPs (million instructions per second) for a $1000 computer, and the number of personal computers manufactured worldwide each year.
In my calculation, based only on hardware, a $1000 personal computer becomes equivalent to a human brain in approximately 2022. In contrast, Ray Kurzweil includes assessment of software development, and estimates that a $1000 personal computer won't be equivalent to a human mind until 2029.
But either way (i.e., just including hardware, as I've done, or including hardware and software, as Ray Kurzweil has done), the amount of processing power added by computers by 2050 becomes absolutely staggering.
For example, Ray Kurzweil estimates that a single $1000 computer will have processing power equal to the entire human race by 2045. My hardware-based estimate has that milestone being reached even earlier...approximately 2038.
So even with Ray Kurzweil's estimates for software development (being slower than hardware development), computers vastly exceed human intelligence as early as the 2030s.
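Milestone years like these all follow from compound growth, so they are easy to re-derive under explicit assumptions. The sketch below assumes a $1000 machine reaches one brain-equivalent in 2022 and doubles in capability every year; both assumptions are mine for illustration, and the result mainly shows how sensitive the dates are to the assumed doubling time:

```python
# Compound-growth sketch behind milestones like "one $1000 computer equals
# the whole human race." The start year, population, and doubling time are
# illustrative assumptions, not the figures from the comment above.
import math

HUMAN_POPULATION = 8e9      # rough number of human brains
START_YEAR = 2022           # assumed year a $1000 machine ~ one brain
DOUBLING_TIME_YEARS = 1.0   # assumed doubling time for $1000 of hardware

doublings_needed = math.log2(HUMAN_POPULATION)
crossover_year = START_YEAR + doublings_needed * DOUBLING_TIME_YEARS
print(f"{doublings_needed:.1f} doublings -> around {crossover_year:.0f}")
# With a one-year doubling time this lands around 2055; the earlier dates
# cited above imply a doubling time of well under a year.
```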
http://www.kurzweilai.net/meme/frame.html?m=1
thank u