Here Comes Artificial Intelligence
Ray Kurzweil's new book imagines man-made minds.
How to Create a Mind: The Secret of Human Thought Revealed, by Ray Kurzweil. Viking, 336 pages, $27.95.
High-functioning artificial intelligence is the stuff of science fiction: the malicious HAL in 2001, the malevolent machines in Battlestar Galactica and The Matrix, the Butlerian Jihad in Frank Herbert's Dune series. Charles Stross' novel Accelerando describes the Matrioshka brain, an artificial mind that requires the energy of a star to function.
But the idea won't necessarily be science fiction forever, and we may have to take the concept of artificial intelligence (AI) seriously sooner than many expect. The ongoing acceleration in technology has prompted serious discussions of AI, including the possibility that the "Singularity"—the creation of a greater-than-human intelligence—might occur. In his 2005 book The Singularity is Near, the futurist Ray Kurzweil predicted that we can expect the Singularity by 2045 and that superintelligences will eventually colonize vast swathes of galaxies. In his latest book, How to Create a Mind, Kurzweil argues that reverse-engineering a human brain is the best route to creating high-functioning AI.
Kurzweil begins by examining the neocortex, the uniquely mammalian part of our brain. The neocortex's hundreds of millions of pattern recognizers, he reports, allow for such rare abilities as language, speech, creativity, and the ability to form evolutionarily advantageous emotions such as love.
Inevitably, Kurzweil considers the question of consciousness, which he argues can emerge from purely physical components. Kurzweil subscribes to a subschool of panprotopsychism—the view that, broadly speaking, all matter has mental properties. According to this account, there is no reason why computers should not be able to experience consciousness.
The discussion of the mind-body problem is frustratingly brief. Kurzweil uses the word "mind" rather than "brain" in his title, he explains, "because a mind is a brain that is conscious." This is something of a philosophical leap. While a book aimed at the layman might not be the best place for a detailed discussion of consciousness, the relationship between mind and body, and free will—ideas that have engaged philosophers from Anaxagoras to Galen Strawson—it would have been nice to have seen a more thorough defense of the author's views. I don't just mean Kurzweil's views of how the mind works. If high-functioning AIs arrive, the primary philosophical issues they raise will be ethical, not technological. Kurzweil, who accepts the moral standing of machines that appear conscious, predicts that we will eventually accept them as equals, and he asserts (while admitting it is something of a leap of faith) that when machines become capable of convincingly describing their experiences, they will constitute conscious persons. If this does occur, a robust philosophical defense of this position will have to be at the ready.
As with anything that Kurzweil writes, there is a question of how accurate his past forecasts have been and how seriously we should take his thoughts on the future. The prediction that the Singularity will be upon us by 2045 has drawn particularly sharp skepticism.
Kurzweil addressed the state of his predictions in an essay, "How My Predictions Are Faring," published in October 2010. According to his own assessment, a clear majority of the forecasts he made in The Age of Spiritual Machines, The Age of Intelligent Machines, and The Singularity is Near have been "essentially correct" or "correct," including his predictions that cloud computing would become more mainstream, that portable computers would become much lighter, and that those portable computers would be able to access libraries and information services. Almost all of Kurzweil's predictions rest on the validity of the Law of Accelerating Returns: in his words, that "fundamental measures of information technology follow predictable and exponential trajectories, belying the conventional wisdom that 'you can't predict the future.'"
Kurzweil obviously takes objections seriously, dedicating a chapter to answering them near the end of How to Create a Mind. He spends less time illustrating how promising AI could be in the short term. The discussion of health is especially brief. Towards the end of the book he raises the possibility that nanobots could monitor and repair cell damage, a technology that would have huge implications for the treatment of chronic diseases like cancer and diabetes. Surely this deserves detailed discussion. Instead we plunge into a vision of superintelligences overcoming the speed of light and colonizing the galaxy.
Still, that promise is there. If Kurzweil is right, we can look forward to prolonged life expectancy, cures for serious diseases, and social changes that would dwarf the significance of the industrial revolution. His optimism about an AI-assisted future is contagious, even if those visions of Matrix-style enslavement still lurk somewhere in the corners of your mind.
A couple jobs ago, we "reverse engineered" our way to 3.0 hours per unit on an engine line (from over 4.2 HPU) to ensure we were competitive. And that was in a UAW facility.
If we can do that, I think it's pretty clear we can "reverse engineer" the human brain.
This is all really about robot sex, isn't it?
I don't know. I'd think that part of the appeal of robot sex would be that they don't have a mind of their own.
The mind is used to operate the sexbot stuff in creative ways, not for, you know, talking and stuff. I assume the sexbot market isn't about replacing women. Or men, for that matter.
I think one of the keys will be ensuring that we have robots who can CREATE teh pron for human consumption. Films and pics that look real, but are the creation of robots.
Cause, y'know, man gots ta have some strange to keep up interest...
We already have robot sex. Fapping while watching broadband pr0n is having sex with a machine. Wrapping the machine in a human-like body and equipping it with a tongue and orifices for penetration is simply an upgrade, not something entirely different.
And there will be demand for a sex robot that can carry on an intelligent conversation with a man afterward, but absent any desire for monogamy or any sort of exclusivity in a relationship or jealousy -- though rule 34 says there will even be a niche market for that.
So, was fapping to my dad's old Penthouses having sex with a magazine?
I don't think it counts until you get the "hands free" attachment for your computer.
Take a computer hooked up to porn. Attach a body frame to it, with padding to grasp, complete with a fleshlight masturbator.
Then you'd unambiguously be having sex with a machine -- but the core of it would still be the images and sounds from the computer projecting into your brain.
Come on, that's not robot sex. Robot sex is when you have sex with a robot. Like what's already happened on the ISS.
Look, I don't care what two robots do in the privacy of their own home, or "pod". I just don't want it in my face all the time.
Eh. With Kurzweil, the Singularity is always near but never here. Conveniently. I think the dude is smart, but the standard logistic curves of many natural systems (think prey animal population growth when predators are removed) look exponential until after the inflection point. And you might go a good ways before your exponential model can't be tweaked if you look at the wrong part of the curve.
I think true AI is going to be one of those things that just happens, when we least expect it. Intelligence in life seems to develop that way.
And by then, it will be too late.
Spontaneous doom.
the standard logistic curves of many natural systems (think prey animal population growth when predators are removed) look exponential until after the inflection point.
This same point comes up in every thread about Kurzweil. The argument isn't simply "look at these growth curves, let's extrapolate." Rather, it's that technological growth arises from individual technologies that each follow sigmoidal growth curves, with older technologies constantly being replaced by newer ones that build on them, launching new, faster growth curves, and so on.
That's, like, sort of the main argument of the book, and it may be wrong, and maybe I'm not explaining it quite right, but it's there, and it's virtually always ignored.
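For what it's worth, here's a toy sketch of that argument (my own illustration, not anything from the book): stack successive S-curves, each generation launched later and with a higher ceiling, and the envelope of the sum grows roughly exponentially even though every individual curve saturates.

```python
import math

def logistic(t, midpoint, ceiling, rate=1.0):
    """One technology generation: S-shaped growth toward a fixed ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def stacked_growth(t, generations=6, spacing=5.0):
    """Sum of successive S-curves; each generation's ceiling doubles."""
    return sum(logistic(t, midpoint=g * spacing, ceiling=2.0 ** g)
               for g in range(generations))

# Each individual curve flattens out, but the total keeps accelerating.
for t in range(0, 31, 5):
    print(t, round(stacked_growth(t), 2))
```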
"With Kurzweil, the Singularity is always near but never here."
No, it's been at a definite time in the future, and he hasn't been pushing that time out as the years have gone by.
He has been making predictions for a long time, and they've been pretty accurate. It's just ignorant to treat him like a cult prophet who predicts the end of the world every week and then pushes the prediction out every time the end doesn't come.
http://spectrum.ieee.org/compu.....futurism/0
Computer programs are only as smart as the people who write them.
So far that seems to be true. But it is not obvious or given that that will always be the case. And of course, it depends on what you mean by "smart".
I've seen some really interesting FPGA self-optimization that did stuff no person ever thought to do. Of course, someone still has to determine what the optimal scenario looks like.
A computer chess program can beat the programmer who wrote it. Thus, it is smarter at chess than its creator.
It's not smarter, it's just able to compute the results of potential moves faster.
"it's not smarter at chess than me, it just happens to be able to beat me every single time"
I was going to give you a thoughtful reply, but after reading your 11:56AM comment I decided it would be a waste of time.
Writing a response not loaded with specialized jargon, to an audience not familiar with that jargon, is a thoughtful reply.
Anyone can spew the thoughts in their head -- the trick is getting others to understand you.
I was able to explain anarcho-capitalism to my 12-year-old using nothing but simple words, illustrating my point by slowing down to the 25 MPH speed limit to demonstrate the absurdity of most government law.
I've also explained E=mc^2 to her, using simple words, back when she was 10 or so.
Not a good example. Progressives assume all anarcho-capitalists have the intellectual development of 12-year-olds.
You lack the base knowledge to make a reply worth the effort.
Can it write a program that can beat it at chess?
You should take a look at Tom Ray's Tierra software.
Just as the market can exceed any one trader's "smarts", software can exceed its creators' "smarts" (usually in particular, well-defined areas, at least so far).
That's still just iterating and recursing over endless loops.
It's not intelligence. It's busy work.
just iterating and recursing over endless loops
... which is what the human brain does in parallel.
Anyhow, my comment was about your statement "[c]omputer programs are only as smart as the people who write them", not about the nature of "intelligence".
Until they equal, and then surpass, human intelligence, enabling them to write computer programs that are far more intelligent than human beings.
"a mind is a brain that is conscious."
Consciousness is like pornography -- you know it when you see it.
You know it when you know it.
If you're a brain, and you're not sure if you're conscious or not, you are.
If he seriously means to build an intelligence by reverse engineering the brain, he's doomed to fail.
Intelligence no more resides solely in the brain than the ability to catch a ball tossed to one depends on the ability to solve the calculus describing the ball's path.
Mind =/= brain, regardless of whether the brain is critical to the presence of mind. The brain may be necessary, but we have good reason to suppose that it is in no way sufficient.
No hugs for thugs!
What?
If mind /= brain, then mind = brain + x
What does x equal?
Superstition.
...and also, though I'm sure it's obvious, Superstition = zero.
What good reason do we have to believe that our minds have any existence beyond our physical brains?
I think the most important technologies are anything related to life extension. If we can overcome or mostly overcome the whole death problem, we'll have more time to figure everything else out.
solving death accelerates population growth problems.
if we solve death, and it spreads to the public for use, it means mandatory sterilizations for all.
that means a change in political structure and social upheaval for free nations.
solving death basically equals the extinction of humanity. Next stop: genetically engineered surrogate "children" to replace us. not homo sapiens
There isn't a population growth problem. Life extension would also lead to increased productivity and allow people to work later into life. All the rest of that about sterilization and the extinction of humanity is nonsense.
Yes but how will the robots vote? Will they be conservatards or libtards? Will a male robot be able to marry another male robot? These are the questions we must ask ourselves.
Bioethicists gotta eat too, ya know.
Robots don't need to vote. They just take over.
Well, Richard Feynman has a much different take. I don't think you are ever going to get true AI or "real sentience" from current methods, no matter how complex or sophisticated. We'll be there when we can model or produce true non-determinism.

All "non-deterministic" finite automata can still be converted into deterministic automata; the only cost is complexity, since the deterministic version may need exponentially more states. Even throwing in a true random number generator (from atomic decay) to seed variables doesn't introduce real non-determinism, because the algorithms using those seeds are themselves still deterministic. If y = f(x) and you choose values of x at random, you still know what y will be for any given x; it's still a 1:1 or N:1 mapping.

What is lacking is a system producing a 1:N mapping (a relation that is not a function), where you cannot determine the output even after isolating all possible inputs; no combinatorial or logical-implication model covers it. You could say its creation is a paradox in itself. But only then would you be able to say you've achieved the holy grail.
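To make the NFA-to-DFA point concrete, here's a minimal sketch of the standard subset construction in Python; the function name and the transition-table encoding are mine, chosen purely for illustration.

```python
from itertools import chain

def nfa_to_dfa(alphabet, delta, start, accepting):
    """Subset construction: build a deterministic automaton whose states
    are *sets* of NFA states. The result can have up to 2**n states for
    an n-state NFA -- the exponential blow-up mentioned above."""
    start_set = frozenset([start])
    dfa_delta = {}
    seen = {start_set}
    worklist = [start_set]
    while worklist:
        current = worklist.pop()
        for symbol in alphabet:
            nxt = frozenset(chain.from_iterable(
                delta.get((q, symbol), ()) for q in current))
            dfa_delta[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                worklist.append(nxt)
    return seen, dfa_delta, start_set, {s for s in seen if s & accepting}

# Tiny demo: an NFA accepting binary strings whose second-to-last symbol is 1.
delta = {("s", "0"): {"s"}, ("s", "1"): {"s", "a"},
         ("a", "0"): {"b"}, ("a", "1"): {"b"}}
states, trans, start, accept = nfa_to_dfa(["0", "1"], delta, "s", {"b"})
print(len(states))  # 4 DFA states from a 3-state NFA
```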
cool story, bro
Try that again in non-jargon English, and maybe I'll try to read it.
Keep it simple enough so your audience can understand you.
I didn't think it was that difficult to comprehend. The basic point, which I think I agree with, is we don't have a way to build non-deterministic computers. Everything in them is pre-determined by the information we feed into them. There's never anything unexpected, or at least, never anything that doesn't have a specific cause that can be discovered.
"There's never anything unexpected, or at least, never anything that doesn't have a specific cause that can be discovered."
Do we know that that is not the case for the human mind?
I think this nut isn't so uncrackable. We just don't like the idea that we aren't somehow beyond artificial replication.
Do we know that that is not the case for the human mind?
Exactly. You can't PROVE that a human mind is NOT a deterministic meatsack full of chemical interactions. Just because it is currently too complex to decipher doesn't mean we're somehow above the laws of chemistry and physics, and thus reaaaaaally complex robots.
You can't prove a negative.
You certainly can prove lots of negative statements. I wish people would stop using the word "proof" for matters of fact. Science works on evidence, not proof.
Logic works by constructing proofs.
There's never anything unexpected, or at least, never anything that doesn't have a specific cause that can be discovered.
That's irrelevant outside of metaphysics, anyway. Determining the original seed value of even the simplest modular random number generator just by looking at the sequence would be an extremely computationally intensive task.
What is lacking is a system producing a 1:N mapping (a relation that is not a function), where you cannot determine the output even after isolating all possible inputs.
I never thought of it that way, but you are exactly correct.
Reminds me of the other night when a deer was running across my neighbor's yard. It saw my car coming at an adjacent angle, tried to decide what to do next, and ended up stumbling and rolling before taking off back into the woods in the direction it came from.
There are 1:N mappings all over the gorram place; they even have a name (multifunctions): logarithms of complex numbers, arcfunctions, etc.
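(A quick sketch of one such multifunction: the complex logarithm is 1:N, since every branch differs by 2*pi*i and all of them exponentiate back to the same input.)

```python
import cmath, math

z = -1 + 0j
# log is 1:N on the complex plane: every branch differs by 2*pi*i,
# and each branch value maps back to z under exp.
for k in (-1, 0, 1):
    w = cmath.log(z) + 2j * math.pi * k
    print(w, cmath.exp(w))
```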
He means nondeterministic algorithms, which are irrelevant for the reasons I gave above.
I'm not sure what your point is. The physical brain's activity is deterministic, it's just extremely difficult to ferret out the precise rules it's following. Don't give me lip about quantum mechanics either; those processes are only "random" for very small numbers of particles. Once you get to the tiniest bit of brain activity, you're involving enough particles that the law of large numbers takes over and it's all essentially deterministic.
Even throwing in a true random number generator (from atomic decay) to seed variables doesn't introduce real non-determinism, because the algorithms using those seeds are themselves still deterministic. If y = f(x) and you choose values of x at random, you still know what y will be for any given x; it's still a 1:1 or N:1 mapping.
I think you're misunderstanding the meaning of determinism here. If x is truly randomly distributed, and you don't know about the value of x, then the value y=f(x) will appear to also be randomly distributed.
The physical and chemical activity is deterministic, but that begs the question of causality. There is a catch-22, or chicken-and-egg issue. This is likely where quantum processes play a role.
I think you're misunderstanding the meaning of determinism here. If x is truly randomly distributed, and you don't know about the value of x, then the value y=f(x) will appear to also be randomly distributed.
Being randomly distributed does not imply non-determinism, which is why I brought it up. Determinism has nothing to do with patterns in the domain or range. If f(x) = x + 2, then no matter whether x was randomly supplied, you always know that for any given x, y will be 2 greater than x.
Algorithmically, everything can be defined as a state machine graph with transition functions (the finite automata I mentioned). But because of the above catch-22, certain very real processes are non-deterministic, so they are not modeled by any known finite automaton (no matter if it's O(n^n!) or whatever). In short, I refer not to tractability/complexity but to computability as the stumbling block.
To elaborate: if we want to model, say, two people thinking different thoughts about the same thing, with their environments and circumstances assumed identical, then the model should produce different results (different paths) for the same automaton and the same inputs; but that would require a non-deterministic algorithm.
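A minimal sketch of the seeding point, using os.urandom as a stand-in for the atomic-decay source mentioned above: the input sequence is unpredictable, but the mapping stays a fixed function.

```python
import os, random

def f(x):
    return x + 2  # a fixed function: the same x always yields the same y

# Seed from an OS entropy source; the input *sequence* is unpredictable,
# but given any particular x, the output is fully determined.
rng = random.Random(os.urandom(16))
for _ in range(3):
    x = rng.random()
    print(x, f(x))
```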
The brain is a slow, parallel processor; its raw computing throughput is low. So the odds that we'll need enormous computing power to create AI (if chasing a human model) are also low.
The notable thing about the brain is the number and connectivity of its operating elements. We're going to need similar densities, although clearly much of the human brain does things we won't need at all (autonomic heartbeat regulation, etc.), and we have more efficient ways to do others (such as speech).
Re making a brain... Everything - and I mean *everything* - we know obeys physics. It is an absolute guarantee that as soon as someone starts to mumble about needing something outside of physics, they've stumbled straight into superstition and we can ignore them. It's an extraordinary claim, with no evidence.
Thus, AI will follow the rules just as a biological intelligence does. And friends, there is nothing special going on inside your head. Chemical reactions, electrical signals, nutrition, connectivity.
The claim that we don't know what's going on, so it must be something special... that's disingenuous. Everything ever solved works in a mundane fashion. The odds are extremely high that we do, too.
It's a matter of lacking algorithms, processing density, and understanding - no more. They will come, and I *highly* doubt it'll be as long as 2045 before we have them.
Not exactly. Deterministic processes can generate output which is random for all intents and purposes: see the middle column of cells in the Rule 30 cellular automaton output.
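For anyone who wants to see it, here's a small sketch of that example: Rule 30 is fully deterministic, yet its middle column looks statistically random.

```python
def rule30_middle_column(width=101, steps=64):
    """Run the Rule 30 cellular automaton and collect its middle column."""
    cells = [0] * width
    cells[width // 2] = 1          # single black cell in the middle
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        # Rule 30: new cell = left XOR (center OR right)
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return column

print("".join(str(bit) for bit in rule30_middle_column()))
```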
I warmly recommend Valentino Braitenberg's Vehicles: Experiments in Synthetic Psychology (MIT Press 1984) for those who are interested in brains, minds, perceived randomness, free will and such issues. A short but delightful exposition, a thought experiment available for anyone who is able and willing to follow a clear train of thought.
Also excellent reading is Stephen Wolfram's "A New Kind of Science."
In it, he demonstrates some of the things CA are capable of (basically, how some things we might consider complex from the outside are trivially computable), and he makes some interesting (but, in my view, unsubstantiated) claims about how CA likely underlie just about everything.
Still, fascinating reading and a work that can really slot CA in an interesting place re AI, AL, and nature in general.
Time stamps and indexes built of random shit, that's how I fake 'em out.
It's amazing what can be solved with randomization. Seriously.
I write bots for Quake engines and Unreal Engine (I'm a masochist at heart). Ontology is not very useful for coding simulated behavior; methinks it is mostly mental jacking off. Linear equations are fine for determining when a bot should crouch or run, and they are fine for simulating nerve synapses as well.
People tend to get really, really good at playing the games after hundreds of hours of doing so. A professional level Q3 player demands nothing less than a 200 frame per second cycle on his system to play the game. Anything less makes them feel trapped. That demands a lot of simplification in code design if you are writing a bot that challenges a top level player, and doesn't outright cheat. You throw out your pretty equations that are good at smacking newbies around and replace them with fugass look up tables.
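For a flavor of that lookup-table trade-off (illustrative code, not actual Quake source): precompute a sine table once at load time instead of calling the math library every frame, trading a little accuracy and memory for speed.

```python
import math

TABLE_SIZE = 256
# Precompute once at load time instead of calling math.sin every frame.
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(angle):
    """Approximate sin(angle) with a table lookup (coarse but fast)."""
    index = int(angle / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SIN_TABLE[index]

print(fast_sin(math.pi / 2), math.sin(math.pi / 2))
```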
Hitler was a human level consciousness. I hope we can do better than that.
You know who else was a human level consciousness?
What? No Skynet references? I am disappoint.
How did you slip that by their censors?!
I think the problem in replicating human intelligence lies in the self-determining nature of meatbags.
Do we really want to develop a super-intelligent computer that is self-determining?
Will it then fear our fear of it and decide to remove a threat to its continued existence?
Computer programs do what computer programmers tell them to do. They do exactly what programmers tell them to do. They don't make mistakes; programmers make mistakes. Programs don't learn. Storing information is one thing, but learning from it is quite another. Sure, they can make decisions based upon stored data, but those decisions will have been pre-programmed. If the programmer didn't anticipate a situation, the program will not know what to do. Programs don't "guess." I suppose they could with some random number generator, but even that is not a true guess, since any random number generator will produce the same results with the same seed. They're just doing what the programmer told them to do.
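A quick sketch of that last point: seed the same generator twice with the same value and you get identical "guesses" both times.

```python
import random

def guesses(seed, n=5):
    """Draw n 'guesses' from a generator seeded with a fixed value."""
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

# Same seed in, same sequence out: the "guess" was never a guess.
assert guesses(7) == guesses(7)
print(guesses(7))
```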
AI in any form close to science fiction is a loooooooooong ways away.
I think you are ill-informed.
The way you describe AI sounds like how it was in 1970.
Back then someone was thinking like you and decided to make some progress.
Maybe you could figure out where they are at. There are many branches of AI and attempts to tackle this problem.
Regardless of whether you think strong AI is possible or remotely close to happening, the question for libertarians is whether the gubmint has any role in deciding whether the attempt can be made.
Of course there's no noise about restrictions on AI now; few people believe human level AI is close. But restrictions on genetic manipulation are already popping up, even though that's in relatively early stages. You can bet that when AI is advanced enough that people believe that human level AI is coming, the shouts to 'stop!' will be deafening.
For the record, my meaningless guess is that human-level AI is 100-200 years off. The most promising sign is some convergence in research interests between traditional AI, cognitive modeling, and theoretical neuroscience.
I was thinking the important issue for libertarians is "when does an artificial consciousness gain human rights?"
Fair point. Not necessarily an easy question, either, because the nature of their consciousness may be quite different from our own.
My own opinion is that we should err on the side of granting and protecting rights, given the obvious evil of enslaving a conscious entity, even if its phenomenology is quite different from our own. Of course, now I'm backing into a PETA corner. Aaah!
lol... all you're going to need is a desktop computer. The gubmint will have about as much say in it as they do in your running some PD program that puts text on JPEGs.
One thing the Federal Government did well was eradicate slavery.
If AIs are conscious, and you can run one on your desktop computer, how is that distinct from slavery?
I guess you haven't actually read the 13th amendment. What the feds did was to take over slavery. They didn't eradicate it. They retain the authorized power to engage in both slavery and indentured servitude - all they need do is convict you of a crime. And that, these days, is a doddle.
If AI's live in computers, consisting of standard code and data, then a computer is equivalent to a body, not to a prison. With a WAN network connection, there's no reason to think an AI can't move from point A to point other. Point other may, or may not, be mobile. Or faster. Or have more memory. Etc. More than one AI could share the hardware. This kind of body- and place-shifting is more than we can do. Pretty awesome, really. Hardly slavery.
I expect it'll take a little while before AI and AI body technologies get to point where everyone is satisfied, though. We're not there yet either.
Computing power alone isn't going to get the job done. There's a lot of fundamental research yet to be done.
Policing research might be very difficult, but the luddites could slow things significantly by creating a culture of fear. Spreading fear could be quite effective because research is a fundamentally social activity.
Of course not. As I said above, it's a matter of algorithms, processing density and understanding.
However, it doesn't necessarily follow that "there's a lot of fundamental research to be done." Since we don't know what the solution is at the moment, we don't know how much research is required, either. It may be as simple as one fundamental breakthrough; it may not. We may get AI in human-model form as postulated by Kurzweil, or we may get something else entirely.
Luddites are no threat, because no luddite can stop me from writing and testing code; and once there is one AI... there can be many more in near zero time. It's an unstoppable, unslowable technology. Coding isn't a fundamentally social activity, either; it always comes down to one person, one computer. Finally, there's no assurance that traditional research will be the dam-breaker; this could come from any gaming company, any hacker's desk, anywhere, really. So it's highly premature to worry about third parties having deleterious effects.
Modeling of human intelligence is far enough along to believe that there will be no magic bullet in understanding it.
The magic lies in sophisticated interactions between very powerful, complex subsystems.
IMO, artificial intelligence will require a similar structure. Achieving that will require continued substantial collaboration between intelligent people.
No. It isn't. We have no model of human intelligence that isn't so high level and abstract as to be not only useless, but irrelevant. In fact, most of what we do have (even the high level stuff) is mushy psychobabble.
Here's what we know how to do that will count towards AI:
o traditional algorithmic solutions to micro-problems (math... lookups... pathfinding)
o associative memory
o vaguely neural network models
o fuzzy logic
o speech generation
o make serial architectures do parallel work
These are like six critical, but scattered, puzzle pieces from a puzzle where we don't know what the final puzzle looks like, where these pieces fit, how many more pieces there are, or even what the orientation of the puzzle is.
It's sheer hubris to think that some high level model of human behavior is relevant at this juncture in any way.
Something else: A baby comes unorganized, unspecialized as to language, etc., and in a year, it's well on the way to being what it needs to be. If you *really* want to use a human model, that's what you should be thinking about, not any adult behavior you can reasonably test out.
It will almost certainly be low level systems that solve this; because that's what everything works with.
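For a sense of how small those puzzle pieces currently are, here's roughly the simplest "vaguely neural" model there is: a single perceptron learning logical AND (a toy sketch, nothing more).

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron: weights and bias for a linear threshold unit."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Logical AND is linearly separable, so one unit suffices.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), _ in data:
    print(x1, x2, 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0)
```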
Are you even listening to yourself?
Attempt to reconcile the above statement with this single fact: We don't know anything about how the brain actually represents and processes information at anything approaching a low level.
So how can you possibly say that you know what kind of interactions are going on, and which ones are key?
We just don't know. So maybe lighten up on the "has to be this way" until we actually, you know, have an idea of (the/one of the?) way(s) it works.
What we know is basic physics; basic serial architecture computing; very high level human behavior (kinda useless at this point); and very low level, partial information on how neurons act and are connected, without info on many other types of cells and connections, and no info on what's being represented, or how it's being processed, at that level.
The day someone figures it out, we'll have AI. Or the government will, anyway (if they get there first.) Until then, we Just Don't Know.
The 'magic is in the interactions' comment referred to the viewpoint developed in computational models like Soar and ACT-R. Opinions differ sharply on the validity or usefulness of those models, of course.
As to what we do or don't know about how information is represented at a low level, it is true that there are no widely accepted theories. There is a lot of interesting work being done, however. Take a look at 'Theoretical Neuroscience', by Dayan and Abbott, for example. I found the math painfully slow going, so I'll admit I've only read random parts of it.
I'm buried in this stuff. Models abound. None of them have so far been of any use in going after the primary goal; and in that context, I don't give any of them any more weight than any of the others.
In my work, I am painfully often reminded that just because we can describe a baseball pitch as simultaneous linear equations, that doesn't mean the body is doing so. You follow? It's a consequence, not a cause. This can be true of any function within us.
My suspicion -- no more valid than that of anyone else working in the field -- is that we should be looking for something very, very simple, and when we find it, there's going to be some forehead smacking among the ivory tower types.
So many problems have fallen just this way. Just an intuition.
Anyway, I close as I started: We don't know how this works, so we don't know what will solve it. All we have done so far is figured out a few things that don't solve it. The one thing I am certain of is that the solution will be in the same realm as everything else: mundane physics.
Inevitably, Kurzweil considers the question of consciousness, which he argues can emerge from purely physical components.
Which is self-evident, since we are all conscious, and made of purely physical components.
Which begs the question of intelligent life in stars or space.
The components for minds are more abundant elsewhere than on class M planets orbiting stars in the Goldilocks zone.
Anybody who's programmed or learned about computers and knows about the brain knows we're light years from creating anything like it.
A recent study revealed that, because of the few thousand different chemicals or chemical combinations that any one neuron can store (think of it as a transistor storing a "number"), any one human brain actually has more switches than all the computers in the world combined.
We're nowhere near reaching that kind of density with any technology we have, not to mention that we'd have no idea how to architect and program it. Indeed, computers have all their circuits designed together to run one process, and that's what we're used to designing and programming, whereas the brain has a few hundred circuits running independently at one time, interacting in who knows how many different ways (immediately shutting off another process, adding data to another) that we have no technology to reproduce.
Artificial intelligence is normally known by the short form AI. It is both an applied and a foundational topic of computer science.