The AlphaGo program devised by DeepMind has beaten a human master of the game of Go and is set to play the world's leading player in March. Nearly two decades ago, IBM's Deep Blue beat world chess champion Garry Kasparov using essentially brute-force computation. AlphaGo is different. In an article in Nature, the artificial intelligence researchers at DeepMind explain how they developed the system. From the abstract:
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play.
We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
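The division of labor the abstract describes is easier to see in miniature. The sketch below is emphatically not DeepMind's code; it is a toy illustration, with placeholder policy_net and value_net functions and a made-up five-move game, of how a policy network narrows the search to promising moves while a value network scores leaf positions in place of thousands of random rollouts.

```python
import math
import random

# Toy stand-in game (hypothetical): states are tuples of past moves,
# three moves are legal at each turn, and the game ends after five moves.
def legal_moves(state):
    return [0, 1, 2] if len(state) < 5 else []

def apply_move(state, move):
    return state + (move,)

# Placeholder networks. In AlphaGo these are deep convolutional networks;
# here they are trivial functions so the sketch runs on its own.
def policy_net(state):
    # Prior probability for each legal move (uniform placeholder).
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves}

def value_net(state):
    # Estimated outcome in [-1, 1] for the position (random placeholder).
    return random.uniform(-1.0, 1.0)

class Node:
    def __init__(self, state, prior):
        self.state, self.prior = state, prior
        self.children = {}                  # move -> Node
        self.visits, self.value_sum = 0, 0.0

def puct(parent, child, c_puct=1.0):
    # Exploit the averaged value; explore where the policy prior is high
    # and the child has been visited rarely.
    q = child.value_sum / child.visits if child.visits else 0.0
    u = c_puct * child.prior * math.sqrt(parent.visits) / (1 + child.visits)
    return q + u

def search(root, num_simulations=200):
    for _ in range(num_simulations):
        node, path = root, [root]
        # 1. Select: descend, always taking the child with the best score.
        while node.children:
            _, node = max(node.children.items(),
                          key=lambda kv: puct(path[-1], kv[1]))
            path.append(node)
        # 2. Expand: the policy network proposes children for the leaf.
        for move, prior in policy_net(node.state).items():
            node.children[move] = Node(apply_move(node.state, move), prior)
        # 3. Evaluate: the value network replaces a random rollout.
        leaf_value = value_net(node.state)
        # 4. Back up the evaluation along the visited path.
        for n in path:
            n.visits += 1
            n.value_sum += leaf_value
    # Play the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("chosen move:", search(Node(state=(), prior=1.0)))
```

The real system also alternates the value's sign between the two players, mixes the value network's estimate with fast rollouts, and batches network evaluations across many simulations; the sketch keeps only the skeleton.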
An accompanying editorial notes that AlphaGo's play is "intuitive" and that the folks at DeepMind do not know what the AlphaGo system is "thinking" when it makes a move. This observation provokes the editors to speculate about how we humans will have to deal with the advent of artificial intelligences whose workings we don't (and can't) understand:
As shown by its results, the moves that AlphaGo selects are invariably correct. But the interplay of its neural networks means that a human can hardly check its working, or verify its decisions before they are followed through. As the use of deep neural network systems spreads into everyday life — they are already used to analyse and recommend financial transactions — it raises an interesting concept for humans and their relationships with machines. The machine becomes an oracle; its pronouncements have to be believed.
When a conventional computer tells an engineer to place a rivet or a weld in a specific place on an aircraft wing, the engineer — if he or she wishes — can lift the machine's lid and examine the assumptions and calculations inside. That is why the rest of us are happy to fly. Intuitive machines will need more than trust: they will demand faith.
Just as a reminder, Hebrews 11:1 declares: "Now faith is the substance of things hoped for, the evidence of things not seen."
Is the age of computational thaumaturgy about to dawn?
"This observation provokes the editors to speculate about how we humans will have to deal with the advent of artificial intelligences whose workings we don't (and can't) understand"
Uh uh, they do. The ready control by stateside operators of Predators and Raptors also has "cruise control." We absolutely could, in a week or less, enable them to look for a person using various search criteria, and when a percentage of confidence that we use now for our "kill" decisions is reached, just go ahead, robo raptor, and squeeze off a "hellfire."
Not to mention the smaller drones that could serve as spotters for high flying predators that have very long loiter times. The hummingbird we have now could be replaced by a dragonfly, and later by a small cloud of filth flies. When they make out the target with the desired degree of confidence, the Predator drops a smaller warhead, smaller missile that glides freely to close with the target and is precisely guided in so that it hits him/her, not just goes in a window or down a chimney and blows up the whole house.
That takes AI now, and you better believe we have it. We just haven't applied the "small spotter drone" concept at all yet, to my knowlege. It certainly follows our current paradigm of the White House harkening back to Johnson during Viet Nam calling in bombing and artillery strikes. Modern day, Obama campaigned for re-elect partly by explaining that he himself approved every drone kill, using a personally approved kill list.
"Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination."
The robots won't *need* to kill - they'll just manipulate events to their benefit and if some troublesome element is 'eliminated' (permanently or otherwise) well, that's just kismet.
And WHO does the "deep neural network" say that I should vote for, anyway? ... OK, and I will NOT ask "why", then, either... The "deep neural network" knows things that we mere carbon units cannot know...
There's actually quite a few scientists that believe we could all be living in a simulation. Why not? It makes as much sense as any other theory about where did all this 'stuff' come from.
The problem is that at the quantum level, nothing exists unless you observe it. So how exactly does the world exist at the macro level when you are not looking? And if it doesn't exist, how the hell does it come into existence when you look? Saying it is just a simulation explains that.
That depends on what you mean by "exists" in a metaphysical sense. Most experts in quantum mechanics wouldn't say that a particle doesn't exist until you observe, just that it's state is unknown.
Quantum Mechanics say that your particles are 13,000 miles over-due for a Quantum check-up, and your Quantum oil pan needs to be re-synchronized, pronto!!!
Given that most people are complete idiots, who I have severe doubts as to whether or not they're actually even self-aware, its not a hard test to pass.
Intuitive machines will need more than trust: they will demand faith.
That is ridiculous. If that's true than any data driven classification system is "intuitive" and requires "faith" since we don't necessarily know a prior why the distinctions are drawn where they are. By that logic Principal Component Analysis requires faith.
Yes, but the flaw is even larger. Any machine or process requires faith by this definition. You have to have faith that the structural members of your house won't collapse, that the gas in your car won't detonate, that the insulation on a light switch won't fail and electrocute you. Hell, Viterbi decoding is literally guessing (in a very educated and robust way) the correct output from a channel.
Extremely few things in life are mathematically verifiable to produce the right answer at all times.
An accompanying editorial notes that AlphaCo's play is "intuitive" and that the folks at DeepMind do not know what the AlphaGo system is "thinking" when it makes a move.
Said it before, but I'll say it again: if/when we do develop an "intelligent" machine, we won't know how it works any better than we know how our own brains work.
And it won't be any more controllable than a human being. Thus no amount of "prime directives" will keep it from doing harm if it decides to do so. And if it isn't reliable and predictable, how would it be useful?
And if it isn't reliable and predictable, how would it be useful?
This is why linear models are still preferable even when vastly oversimplifying. If they model something well enough, then we ought to be able to predict, control, and make use of whatever the system is, well enough.
This observation provokes the editors to speculate about how we humans will have to deal with the advent of artificial intelligences whose workings we don't (and can't) understand:
*can't* is exceedingly presumptive, bordering on fanatical. I don't know and can't evaluate the precise state of a dog's brain as it catches a frisbee. It doesn't necessarily make a dog catching a frisbee miraculous or the inner workings of a dog's mind forever unknowable.
There have been an inordinate number of articles lately on AI. Is Robby trying to tell us something? Is he trying to sneak a warning past Guardian/Colossus?
Or has Robby already uploaded his consciousness? Or maybe he never was an organic being to begin with?
Intuitive machines will need more than trust: they will demand faith.
Bullshit. Empiricism isn't going away. And I don't see any reason why humans couldn't learn why a machine recommended a particular course of action through the scientific method, like we've been learning everything else for centuries now.
"I don't see any reason why humans couldn't learn why a machine recommended a particular course of action through the scientific method"
It makes a lot more sense to use the substitution method. Science is about measurement and observation of phenomena, and the flow of electrical currents within a computer can be measured and observed but it would be hellishly difficult to interpret them. Computers are more amenable to a mathematical approach. Computer 'science' is something of a misnomer. It's closer to math or engineering.
An accompanying editorial notes that AlphaCo's play is "intuitive" and that the folks at DeepMind do not know what the AlphaGo system is "thinking" when it makes a move.
Hmm...really? They can't have it log the operations it used to come up with a decision and print them out?
Yeah. Just like with the canine logic machine; at some point the frisbee catching intuition machine is going to conflict with the vehicle evasion intuition and evolution will dictate introspection will become a necessity (again) and miracles will ("by necessity") cease to exist (again).
Nobody will understand it much more than the overwhelming majority of people today understand compiler design, it doesn't mean compilers are crafted by mystic principles and run on faith and that programs can't be reverse engineered or decompiled/disassembled.
SPOILER ALERT!
Twilight Zone
Season 5 Episode 7
November 8, 1963
(James Coburn, John Anderson)
en.wikipedia.org/wiki/The_Old_Man_in_the_Cave
"A clash of wills ensues and, frustrated by Goldsmith's quiet and steadfast refusal to bend, French tries to dispel the townspeople's strange beliefs about the seemingly infallible old man in the cave and take control of the area. French tempts the townspeople with some of the food Goldsmith claimed was contaminated and many throw caution to the wind and partake. Everyone except Goldsmith eventually consumes the food and drink and Goldsmith falls into disfavor among the townspeople. After being bullied and threatened with his life, Goldsmith finally opens the cave door and it is ultimately revealed that in reality, the townsfolk have been listening to a computer the whole time. In a fit of rage at being deceived, the people of the town destroy the computer. However, as Mr. Goldsmith had insisted, the "old man" was correct; without an authority figure to tell them which foods are safe, the entire human population of the town (including the soldiers) die horribly ? except for the lone survivor, Mr. Goldsmith."
Hi, I'm Tracy, i had my friend help me hack my ex's email cause i suspected he was cheating. all he asked for was his phone number. Contact him now, his email is hacksolution7@gmail.com..IF u need help tell him Tracy referred you to him and he'll help. at first i did not give much thought, but my mind was still bothered .so i decided to contact the hacksolution7@gmail.com to help catch my cheating spouse,he delivered as was promised he is really a genius,he also does P.I jobs, clears your record, passwords,I love him and his work. you should try it. Good lucknmm
I'm sorry but neural networks, even deep neural networks are hardly new. The fact you don't know what training caused the neural network to reach a result is a feature, not a bug. How they set up these two neural networks (one to evaluate board positions and one to suggest moves) may not have been possible until recently (lack of computational power), but the theories were developed over time. In my view, Go is uniquely well suited for this approach as well. That being said, I'd be fascinated to hear how well the program does against the masters the 3rd to 10th time it plays them.
An accompanying editorial notes that AlphaCo's play is "intuitive" and that the folks at DeepMind do not know what the AlphaGo system is "thinking" when it makes a move.
This is not quite right. They know what it CAN notice because they've put parameters for that into it.
When a conventional computer tells an engineer to place a rivet or a weld in a specific place on an aircraft wing, the engineer ? if he or she wishes ? can lift the machine's lid and examine the assumptions and calculations inside. That is why the rest of us are happy to fly. Intuitive machines will need more than trust: they will demand faith.
Likewise they can--and most likely have--set it up so they can do just this, open it up and see how it got from point A to 'weld here'.
"This observation provokes the editors to speculate about how we humans will have to deal with the advent of artificial intelligences whose workings we don't (and can't) understand"
So long as it doesn't involve killer robots.
I mean, we have drones, but AFAIK the drones don't have artificial intelligence.
Not,
Uh uh, they do. The ready control by stateside operators of Predators and Raptors also has "cruise control." We absolutely could, in a week or less, enable them to look for a person using various search criteria, and when a percentage of confidence that we use now for our "kill" decisions is reached, just go ahead, robo raptor, and squeeze off a "hellfire."
Not to mention the smaller drones that could serve as spotters for high flying predators that have very long loiter times. The hummingbird we have now could be replaced by a dragonfly, and later by a small cloud of filth flies. When they make out the target with the desired degree of confidence, the Predator drops a smaller warhead, smaller missile that glides freely to close with the target and is precisely guided in so that it hits him/her, not just goes in a window or down a chimney and blows up the whole house.
That takes AI now, and you better believe we have it. We just haven't applied the "small spotter drone" concept at all yet, to my knowledge. It certainly follows our current paradigm of the White House harkening back to Johnson calling in bombing and artillery strikes during Vietnam. Modern day, Obama campaigned for re-election partly by explaining that he himself approved every drone kill, using a personally approved kill list.
"Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination."
The robots won't *need* to kill - they'll just manipulate events to their benefit and if some troublesome element is 'eliminated' (permanently or otherwise) well, that's just kismet.
+1 Colossus
"Forty-two."
I AM THE ESCHATON. THOU SHALT NOT VIOLATE CAUSALITY IN MY LIGHTCONE
I, for one, welcome our new AI overlords. They can't be as stupid as politicians; hence the use of the word 'intelligence'.
And WHO does the "deep neural network" say that I should vote for, anyway? ... OK, and I will NOT ask "why", then, either... The "deep neural network" knows things that we mere carbon units cannot know...
Well, we are several years overdue for the beginning of CASE NIGHTMARE GREEN.
"Is there a God?"
"There is *now*!"
There are actually quite a few scientists who believe we could all be living in a simulation. Why not? It makes as much sense as any other theory about where all this 'stuff' came from.
The problem is that at the quantum level, nothing exists unless you observe it. So how exactly does the world exist at the macro level when you are not looking? And if it doesn't exist, how the hell does it come into existence when you look? Saying it is just a simulation explains that.
That depends on what you mean by "exists" in a metaphysical sense. Most experts in quantum mechanics wouldn't say that a particle doesn't exist until you observe it, just that its state is unknown.
That's . . . not what Quantum Mechanics says.
Quantum Mechanics say that your particles are 13,000 miles over-due for a Quantum check-up, and your Quantum oil pan needs to be re-synchronized, pronto!!!
It seems like all you guys are passing the Turing test. Hmm...
Given that most people are complete idiots (I have severe doubts as to whether they're actually even self-aware), it's not a hard test to pass.
Is the Age of Transwarp upon us, causing us to de-evolve into giant horny salamanders?
-2 Brannon/Braga
Intuitive machines will need more than trust: they will demand faith.
That is ridiculous. If that's true, then any data-driven classification system is "intuitive" and requires "faith," since we don't necessarily know a priori why the distinctions are drawn where they are. By that logic, Principal Component Analysis requires faith.
^^THIS^^
Our "faith" will be based on actual performance and data. And that is not "faith".
PCA (PBUH)
Yes, but the flaw is even larger. Any machine or process requires faith by this definition. You have to have faith that the structural members of your house won't collapse, that the gas in your car won't detonate, that the insulation on a light switch won't fail and electrocute you. Hell, Viterbi decoding is literally guessing (in a very educated and robust way) the correct output from a channel.
Extremely few things in life are mathematically verifiable to produce the right answer at all times.
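For what it's worth, the PCA example above is easy to make concrete: everything a fitted PCA model "knows" sits in plain view as a small matrix of loadings that anyone can print and audit. A minimal sketch, assuming NumPy and scikit-learn are available:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic data: feature 2 is nearly twice feature 1; feature 3 is noise.
x = rng.normal(size=500)
X = np.column_stack([x,
                     2 * x + rng.normal(scale=0.1, size=500),
                     rng.normal(size=500)])

pca = PCA(n_components=2).fit(X)

# No oracle here: the fitted model is just these numbers.
print("loadings:\n", pca.components_)
print("explained variance:", pca.explained_variance_ratio_)
```

Whether a deep network's millions of weights are as legible is the real question; the point is only that "data-driven" does not automatically mean "taken on faith."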
An accompanying editorial notes that AlphaGo's play is "intuitive" and that the folks at DeepMind do not know what the AlphaGo system is "thinking" when it makes a move.
Said it before, but I'll say it again: if/when we do develop an "intelligent" machine, we won't know how it works any better than we know how our own brains work.
And it won't be any more controllable than a human being. Thus no amount of "prime directives" will keep it from doing harm if it decides to do so. And if it isn't reliable and predictable, how would it be useful?
And if it isn't reliable and predictable, how would it be useful?
This is why linear models are still preferable even when vastly oversimplifying. If they model something well enough, then we ought to be able to predict, control, and make use of whatever the system is, well enough.
Well enough is a bitch, though.
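Still, "well enough" buys a lot here: every prediction from a linear model decomposes into coefficient-times-input terms you can read off directly. A small sketch using only NumPy, with invented data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented data generated by a known rule: y = 3*x1 - 2*x2 + noise.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Every prediction is an auditable sum: intercept + w1*x1 + w2*x2.
print("intercept and weights:", coef)
print("prediction at x = (1, 1):", coef[0] + coef[1] + coef[2])
```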
And if it isn't reliable and predictable, how would it be useful?
Neither markets nor evolution nor human beings are reliable or predictable. They are still useful.
This observation provokes the editors to speculate about how we humans will have to deal with the advent of artificial intelligences whose workings we don't (and can't) understand:
*can't* is exceedingly presumptive, bordering on fanatical. I don't know and can't evaluate the precise state of a dog's brain as it catches a frisbee. It doesn't necessarily make a dog catching a frisbee miraculous or the inner workings of a dog's mind forever unknowable.
Are unsolvable equations or irrational numbers "miraculous"? If not, then I don't see how this is.
I'll admit to feeling an almost "spiritual" sense of awe when I first saw a proof of e^{i\pi} + 1 = 0. Almost like the void really was staring back.
You just don't believe in Dog.
There have been an inordinate number of articles lately on AI. Is Robby trying to tell us something? Is he trying to sneak a warning past Guardian/Colossus?
Or has Robby already uploaded his consciousness? Or maybe he never was an organic being to begin with?
Oops, I meant Ron Bailey, not Robby. Brain fart!!
An 'edit' feature would have been nice.......
Why? - Robby's hair is already on to you.
Oops, I meant Ron Bailey, not Robby. Brain fart!!
So much for deep neural network.
Insinuating that you have a brain. Nice try, HAL.
Intuitive machines will need more than trust: they will demand faith.
Bullshit. Empiricism isn't going away. And I don't see any reason why humans couldn't learn why a machine recommended a particular course of action through the scientific method, like we've been learning everything else for centuries now.
"I don't see any reason why humans couldn't learn why a machine recommended a particular course of action through the scientific method"
It makes a lot more sense to use the substitution method. Science is about measurement and observation of phenomena, and the flow of electrical currents within a computer can be measured and observed but it would be hellishly difficult to interpret them. Computers are more amenable to a mathematical approach. Computer 'science' is something of a misnomer. It's closer to math or engineering.
An accompanying editorial notes that AlphaGo's play is "intuitive" and that the folks at DeepMind do not know what the AlphaGo system is "thinking" when it makes a move.
Hmm...really? They can't have it log the operations it used to come up with a decision and print them out?
Yeah. Just like with the canine logic machine; at some point the frisbee catching intuition machine is going to conflict with the vehicle evasion intuition and evolution will dictate introspection will become a necessity (again) and miracles will ("by necessity") cease to exist (again).
Nobody will understand it much more than the overwhelming majority of people today understand compiler design, but that doesn't mean compilers are crafted by mystic principles and run on faith, or that programs can't be reverse engineered or decompiled/disassembled.
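The reverse-engineering point can be demonstrated with nothing but a stock interpreter: Python's built-in dis module lists the exact bytecode a function executes, a small-scale version of the "lift the lid" inspection the editorial describes. The function here is a made-up example:

```python
import dis

def score_move(x):
    """A made-up toy function standing in for any opaque-looking logic."""
    return 2 * x + 1

# Prints every operation the interpreter will perform, in order.
dis.dis(score_move)
```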
SPOILER ALERT!
Twilight Zone
Season 5 Episode 7
November 8, 1963
(James Coburn, John Anderson)
en.wikipedia.org/wiki/The_Old_Man_in_the_Cave
"A clash of wills ensues and, frustrated by Goldsmith's quiet and steadfast refusal to bend, French tries to dispel the townspeople's strange beliefs about the seemingly infallible old man in the cave and take control of the area. French tempts the townspeople with some of the food Goldsmith claimed was contaminated and many throw caution to the wind and partake. Everyone except Goldsmith eventually consumes the food and drink and Goldsmith falls into disfavor among the townspeople. After being bullied and threatened with his life, Goldsmith finally opens the cave door and it is ultimately revealed that in reality, the townsfolk have been listening to a computer the whole time. In a fit of rage at being deceived, the people of the town destroy the computer. However, as Mr. Goldsmith had insisted, the "old man" was correct; without an authority figure to tell them which foods are safe, the entire human population of the town (including the soldiers) die horribly ? except for the lone survivor, Mr. Goldsmith."
The Beginning of DOOOOOOMMMMM!
Ron may have to write a new book, if They will let him.
I'm sorry, but neural networks, even deep neural networks, are hardly new. The fact that you don't know what training caused the neural network to reach a result is a feature, not a bug. How they set up these two neural networks (one to evaluate board positions and one to suggest moves) may not have been possible until recently (lack of computational power), but the theories were developed over time. In my view, Go is uniquely well suited to this approach as well. That being said, I'd be fascinated to hear how well the program does against the masters the 3rd to 10th time it plays them.
"The machine becomes an oracle; its pronouncements have to be believed."
Yeah, those global warming models are so good we must sacrifice to save humanity.
An accompanying editorial notes that AlphaGo's play is "intuitive" and that the folks at DeepMind do not know what the AlphaGo system is "thinking" when it makes a move.
This is not quite right. They know what it CAN notice because they've put parameters for that into it.
When a conventional computer tells an engineer to place a rivet or a weld in a specific place on an aircraft wing, the engineer — if he or she wishes — can lift the machine's lid and examine the assumptions and calculations inside. That is why the rest of us are happy to fly. Intuitive machines will need more than trust: they will demand faith.
Likewise they can--and most likely have--set it up so they can do just this, open it up and see how it got from point A to 'weld here'.
So, no 'faith' needed.