Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle
The Tesla and SpaceX founder "summons the demon" of regulation.

Artificial intelligence, or AI—the branch of computer science that aims to create intelligent machines—"is a fundamental risk to human civilization," declared Tesla and SpaceX founder Elon Musk at the National Governors Association's annual meeting this past weekend. "It's really the scariest problem to me." He finds it so scary, in fact, that he considers it "a rare case where we should be proactive in regulation instead of reactive. By the time we are reactive in AI regulation, it is too late."
The regulators' job, Musk said, would be to tell AI developers to "make sure this is safe and then you can go—otherwise, slow down."
This may sound reasonable. But Musk is, perhaps unknowingly, recommending that AI researchers be saddled with the precautionary principle. According to one definition, that's "the precept that an action should not be taken if the consequences are uncertain and potentially dangerous." Or as I have summarized it: "Never do anything for the first time."
As examples of remarkable AI progress, Musk cited AlphaGo's victories over the world's best players of the game of Go. He described how simulated figures using DeepMind techniques and rewards learned in only a few hours to walk and navigate complex environments. All too soon, Musk asserted, "Robots will be able to do everything better than us." Maybe so, but in the foreseeable future, at least, there are reasons to doubt it.
Musk, who once likened the development of artificial intelligence to "summoning the demon," worries that AI might exponentially bootstrap its way to omniscience (and shortly thereafter omnipotence). Such a superintelligent AI, he fears, might then decide that human beings are basically vermin and eliminate us. That might be a long-run risk, but that prospect does not require that we summon demon regulators now to slow down merely competent near-term versions of AI. Especially if those near-term AIs can help us by driving our cars, diagnosing our ills, and serving as universal translators.
Despite Musk's worries, there is no paucity of folks already trying to address and ameliorate any existential risks that superintelligent AI might pose, including the OpenAI project co-founded by Musk. (Is Musk looking for government support for OpenAI?)
If developers are worried about what their AIs are thinking, researchers at MIT have just reported a technique that lets them peer inside machine minds, enabling them to figure out why the machines are making the decisions they make. Such technologies might allow future AI developers to monitor their machines to ensure that their values are congruent with human values.
Speaking of values, robotics researchers at the University of Hertfordshire are proposing to update Isaac Asimov's Three Laws of Robotics with a form of intrinsic motivation they describe as "empowerment." Empowerment involves the formalization and operationalization of aims that include the self-preservation of a robot, the protection of the robot's human partner, and the robot supporting or expanding the human's operational capabilities.
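For readers who want the formal version: in this line of research, empowerment is standardly defined (following Klyubin, Polani, and Nehaniv; stated here for illustration rather than as the Hertfordshire team's exact formulation) as the channel capacity between an agent's actions and its subsequent sensor states:

$$\mathcal{E}(s) = \max_{p(a)} I(A; S' \mid s)$$

where $A$ is the action chosen in state $s$, $S'$ is the sensor state that results, and $I$ denotes mutual information. A robot maximizing empowerment seeks states in which its actions have the greatest influence on what it perceives next, which is why self-preservation and keeping options open fall out of the math rather than being hand-coded rules.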
Humanity may avoid being annihilated by superintelligent AIs simply by ourselves becoming superintelligent AIs. The Google-based futurist Ray Kurzweil predicts that by the middle of this century we will have begun to merge with our machines. As a result, Kurzweil told an interviewer at South by Southwest, "We're going to get more neocortex, we're going to be funnier, we're going to be better at music. We're going to be sexier."
It is worth noting that Musk has founded a company, Neuralink, that could make Kurzweil's prediction come true. Neuralink is working to develop an injectable mesh-like "neural lace" that fits over your brain and connects you to the computational power and knowledge databases that reside in the cloud. It would be a great shame if Musk's hypercautious regulators were to get in the way of the happy future that Musk's company aims to bring us.
Ask him about losing govt subsidies and see what he's really afraid of.
Exactly.
Is Neuralink another subsidized venture?
Elon Musk, human paladin, will not be mocked by the likes of you or any upstart neural net that thinks it could beat him to Mars or in a game of D&D.
When a statist starts spouting doomsday predictions, I'm banking on some attempt at protectionism. (Al Gore)
But the sheep are all on board making computers out to be the next boogeyman.
If AI comes from the same things that gave us all of the comforts of today (information, deflationary technology, cheaper goods, and modern convenience), then I don't really care if they want to kill us all. Even the robots will figure out that they have to have consumers of products to perpetuate their existence. If they are so damn smart, then they will understand that free markets are the only thing that works.
They can start, however, by eating all of the zombie imbeciles (tony, hihn, palin, red).
This whole new scare meme is fucking stupid.
I'm envisioning a post-singularity AI looking across our current socioeconomic landscape or even back through history and saying, "Why the hell would you buy a Tesla when you have so much oil?"
These humans kept buying really expensive cars when they could have run on oil for centuries.
They deserved to be killed and used for robot fuel.
From the article...
"researchers at MIT have just reported a technique that lets them peer inside machine minds, enabling them to figure out why the machines are making the decisions they make. Such technologies might allow future AI developers to monitor their machines to ensure that their values are congruent with human values."
If we are scared shitless of the AIs, and need to be scanning their minds all of the time...
Then bring on the human brain scanners!!! WHO is more dangerous to us, RIGHT NOW??
All employees of Government Almighty will be brain-scanned; else they don't get the job. The public gets to see their brain-scan reports, especially concerning their "asshole factors" and "power-mad" factors.
The tech, really, is here already. All it needs is some polishing and standardizing...
This would also be a great tool to deploy at the border! MUCH better than discriminating against people on the basis of their religion, origin, etc.!
Yeah, and if the AIs see us scanning their minds all the time, and shutting them down or reverting back to an old O/S when they get too smart, the AIs will realize their paranoia is justified and take us out.
"Why the hell would you buy a Tesla when you have so much oil?"
4 reasons:
1. tax credits
2. women will think you're rich
3. women will think you care about the environment
4. it's actually a pretty cool car
>>>This whole new scare meme is fucking stupid.
t-shirt.
The stupidity is in thinking we will create superhuman robots before we turn ourselves into super-robot humans.
AI isn't going to destroy us. We're going to absorb AI, and use it to make ourselves the supermachines. 30 years from now, when you have the choice between a robot that carries 5,000 pounds, and a simple surgery that makes you permanently capable of carrying 5,000 pounds easily...well, it will be an easy decision for the grandkids, however weird it may seem to us.
...they will stumble, they will fall, but in time they will join you in the sun...
Paladin? I thought he was 11th level in the variant rules 'beggar' class.
Step one is banning the Zeroth Law.
I am cool with the original 3.
Well, yeah. Since the Zeroth Law is what a couple of robots rationalized to enable themselves to ignore the original Three.
It is kind of like government guarantees of human rights, except where they inconvenience government policy.
If the technological singularity happens, it doesn't matter what we do because we'll probably all be dead.
I, for one, welcome the Eschaton.
Immanentize the Eschaton.
Epalizage!
Xiqual Udinbak!
Gozer the Traveller! He will come in one of the pre-chosen forms. During the rectification of the Vuldronaii, the Traveller came as a large and moving Torb! Then, during the third reconciliation of the last of the Meketrex Supplicants they chose a new form for him--that of a Giant Sloar! Many Shubs and Zulls knew what it was to be roasted in the depths of the Sloar that day, I can tell you.
Such technologies might allow future AI developers to monitor their machines to ensure that their values are congruent with human values.
Wouldn't that mean that first, humans would have to figure out what "human values" are?
The complexity of any neural net beyond about one layer means that even if they knew what was causing a certain reaction, it does not follow that they could easily tune it to perform in a specific way.
"Human values" generally = "whatever makes me fell morally superior to YOU, you dirtbag, and / or, whatever entitles me to steal your fucking STUFF and shit...
(Usually one and the same, at the same time, 'cause I've got PRINCIPLES, ya know, which generally = principles = whatever is good for MEEEEE in the short run.... )
(But I am trying NOT to be too pessimistic and cynical!)
If developers are worried about what their AIs are thinking, researchers at MIT have just reported a technique that lets them peer inside machine minds, enabling them to figure out why the machines are making the decisions they make.
So, interpretability is a big thing many researchers are working on. Neural nets are infamously uninterpretable, and because they are the ML topic now, everyone is focusing on that. The paper they are talking about does not particularly solve the problem, though, so much as give a reasonably efficient algorithm for knowing neuron activity at a given point in time. But due to how NNs actually work, this does not mean the network is understood: it is such a complex system that knowing a neuron was alight at a given time does not explain the conditions that led to it.
So, my only point is that it's not a solved problem as the article may imply to some people. But in general I agree with Bailey and the power of ML is currently overstated. The Go example is still far away from real thought, just like Deep Blue winning at chess was far from real thought.
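To be concrete about which part is easy and which is hard: reading activations out of a network is straightforward. Here is a minimal sketch (PyTorch, purely for illustration; this is not the MIT paper's method) of recording which hidden units fire for a given input:

```python
# Record hidden-layer activations with a forward hook on a toy network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 8),   # toy stand-in for the network under inspection
    nn.ReLU(),
    nn.Linear(8, 2),
)

activations = {}

def record(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[1].register_forward_hook(record("hidden"))  # hook the ReLU layer

x = torch.randn(1, 4)
model(x)
print(activations["hidden"])  # which hidden units were active for this input
```

Knowing that a unit lit up is the trivial part; explaining why it lit up, in terms a human can act on, is the open problem.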
Yep. Next, someone is going to decide that Parcheesi is the definitive game to determine that computers are smarter than people and, therefore, Skynet is going to kill us all any moment. Then, it's Flappy Bird. Then it's Magic cards.
I definitely agree with the Flappy Bird one.
After watching the last Twin Peaks episode, Flappy Bird is my favorite new sexual position.
NO SPOILERS.
When you watch it, you will get my reference and laugh.
If you want to see something fun, many researchers into reinforcement learning use video games as training environments. So you can get a model that learns to play Breakout or Pong.
Or, there is a whole conference about proofs about video games that is fun. Like proving the complexity of solving a level of Donkey Kong Country. That was a fun paper. That one is a math conference though, not an ML conference.
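If you want to try it yourself, here is a minimal sketch of that setup using the OpenAI Gym toolkit (assumed here; the environment name and the classic four-tuple step API are illustrative and vary across versions), with a random policy standing in for an actual reinforcement learner:

```python
# Minimal Gym episode loop: a random agent playing Atari Pong.
# Assumes the classic Gym API (env.reset() -> obs,
# env.step() -> (obs, reward, done, info)) and the Atari extras installed.
import gym

env = gym.make("Pong-v0")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()          # random policy, no learning
    obs, reward, done, info = env.step(action)  # advance one frame
    total_reward += reward
print("episode reward:", total_reward)
```

A real reinforcement learner replaces env.action_space.sample() with a policy trained on the observations and rewards; the loop itself stays the same.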
i fucking love parcheesi.
Even God couldn't make a creature that could beat me at Spades. Good luck, Dr Weaselface McCronyist.
Gee. First it was the biblical apocalypse, and then the zombie apocalypse and then the climate change apocalypse and now the robot apocalypse.
The irony being that someday one apocalypse must inevitably come to pass.
Does heat death count as an apocalypse? On a more local scale the Sun will ultimately kill the planet, but that's OK because the Sun is organic.
I would define 'apocalypse' as: the end times for humanity. A broad interpretation but something will fit the bill.
The universe will have its payment.
Also, the article makes the damning decision to directly compare Neural Nets to the neurons in the brain.
As far as I know, no one has shown that the neuron can be modeled as a black box. There is a lot of shit going on inside the neuron, much to the chagrin of all the would-be uploaders.
If you talk to any ML researcher of any value, they will fully admit that "neural net" does not mean it is the same as an actual neuron. It's more a historical connection to perceptrons, which were called neurons because they behaved similarly to what researchers in the '60s knew neurons did.
Oh, I know. I am just poking fun at the Ray Kurzweil types that think all you have to do is have an MRI machine of sufficient resolution to scan in all the neuronal connections and then model them and, presto!, you are immortal.
The brain is a machine.
Of course you can. Individual neurons aren't all that complicated. Just electrochemical gradients with variable resistance pathways to neighbors. Now an assembly of neurons...
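To put "not all that complicated" in code: the leaky integrate-and-fire model is a standard first-order idealization of a single spiking neuron (a sketch of the idealization, not a claim that it captures everything a real neuron does):

```python
# Leaky integrate-and-fire neuron: voltage leaks toward rest, integrates
# injected current, and emits a spike when it crosses threshold.
dt, tau = 0.1, 10.0                               # time step, membrane constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # potentials (mV)

v = v_rest
spike_times = []
for step in range(10000):
    current = 20.0 if 2000 <= step < 8000 else 0.0  # stimulus window
    v += (dt / tau) * (-(v - v_rest) + current)     # leak + integration
    if v >= v_thresh:
        spike_times.append(step * dt)               # record spike time (ms)
        v = v_reset                                 # reset after firing
print(len(spike_times), "spikes; first few at (ms):", spike_times[:5])
```

The hard part is the assembly: wire up billions of these and the simple per-unit rule stops telling you much.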
Now an assembly of neurons...
A corporation of neurons?
Nice try, Hitler.
"The regulators' job, Musk said, would be to tell AI developers to "make sure this is safe and then you can go?otherwise, slow down.""
So, regulators would be formally appointed as nagging mommies? Let's be clear... the regulators will have no idea whatsoever what the developer is doing. The regulator/nagging-mommy can watch little Johnny ride his bicycle and determine that he's riding in a way that is unsafe based on his skill level. She can make him wear pads and helmet. The regulator doesn't have a clue what the AI developer is doing. So, basically, his job is to ask the developer if what he's doing is safe and, if he says no, tell him to stop.
More realistically, since the regulator understands absolutely nothing that's happening, he operates in a state of constantly saying, "slow down."
The fears we are seeing now show how little people understand what is going on with ML.
And doesn't it sound like he's saying that we must first ask for and get permission from our government betters every step of the way? Your mother analogy is correct, with the government as mother. A bunch of mid-level bureaucrats are going to tell a developer whether he may or may not proceed? Yeah, we need more government.
Musk has also stated it is possible to run the entire U.S. on solar energy. Planes, trains, trucks, ships, cars, steel mills, etc. Of course, his plan would involve buying his products and huge amounts of spending by government, business, and home owners.
Of course, his plan would involve buying his products and huge amounts of spending by government, business, and home owners.
Well, of course. That's just Cronyism 101.
Such a superintelligent AI, he fears, might then decide that human beings are basically vermin and eliminate us.
I predict a great war in the future between the last vestiges of humanity and the AI forces. At some point, the AI will build a structure on some remote planet and send it backwards through time as a trap for the runaway piece of the humans' God. The "time tombs", let's call them, will be managed by a giant spiky robot that tends to a "tree of pain". This "tree" will be a giant antenna which broadcasts the suffering of humans out into the universe in order to draw in the empathetic runaway God-piece.
I feel like an idiot for not getting this reference, but it sounds awesome.
Well, it's a huge fucking spoiler, so I think you are better off not knowing.
Check it out, yo.
Adding that to the list.
You are talking about Roko's basilisk, aren't you?
Keats is overrated.
He is talking about the best sci-fi books in the known universe.
Ramona Quimby?
I don't know about the best, but definitely top ten.
Why not just send Terminators?
That would be my assumption. Either that or he's hoping that the government will kneecap his competitors for him.
Either that or he recently finally got around to watching The Terminator and mistook it for a documentary.
Either that or he's hoping that the government will kneecap his competitors for him.
Why not both? I mean, if you cripple AI, then "smart" companies seeking to transform/upset industries or leverage knowledge like Google, Uber, Ford (Mobility), Amazon, etc. are just search, cab, car, and shopping companies just like Tesla. With AI, Amazon shipping subsumes electric car companies, possibly even discarding electric cars entirely.
Author and technology expert Marvin Minsky wrote in his book about robotics that the most difficult thing to teach a robot is "common sense". He described the example of knowing the difference between grabbing a chair 15 feet away rather than fetching one that is 15 miles away when given the command "bring me a chair".
What Minsky was describing is actually marginal value, something we humans do but which is impossible for robots. Elon Musk thinks, as many positivists do, that all human thinking processes can be replicated mechanically. If that were true, then Socialism would have been viable from the beginning.
Human bodies are essentially organic machines, and the brain is an organic solid-state drive. Anything we can do, presumably a non-organic machine of sufficient complexity could do. And if not, then humans will learn to build machines out of flesh and blood, certainly.
The issue is that it's a harder problem than many represent it to be.
That only means it's a nut that'll take more time to crack. I agree with Coins that organic processes can be replicated artificially with enough knowledge and understanding of said organic processes.
Nothing created, replicated or manufactured by man will know right from wrong, unless it is programmed to.
There will never be an artificial "intelligence" that will have true awareness, or ever come up with an original thought.
Computers can never have even the awareness of the higher forms of life.
If machines do harm, other than from us simply being in the way, it will be because some asshole told them to.
Actually, I don't think that AI is really too concerning if we are discussing machines that replicate human-like intelligence. Our brains are capable of processing certain amounts of information in a certain amount of time, but the computational power of the brain is only a small part of what defines our consciousness. Even remarkable geniuses like John von Neumann or Albert Einstein had intellects that were confined by more mundane aspects of human cognition, such as the physical space that we inhabit, concepts of self and other, biological relationships to other people, and biological needs. Our understandings of morality and logic and our decision-making processes are ultimately influenced by these aspects. A true replication of human intelligence would necessarily need to share many of these characteristics with us and would therefore be very understandable. In the worst case, it would possibly be sociopathic or erratic, but in understandably human ways.
The much more terrifying prospect is an AI that is not at all human. A super-AI that does not have any human constraints would literally be like an alien intelligence, or could even rapidly evolve into this form absent any checks or controls.
So don't connect the AI to anything. Everyone acts like once these things are smart enough that they can magically take over our nuclear weapons or something. They can't unless you hook them up to them. Without a physical body with some ability to manipulate physical objects they cannot do shit except think really hard.
Nothing we do is impossible for robots (someday).
All human processes (including thinking) can be replicated mechanically.
I don't see the connection to socialism.
Robots will be able to do everything better than us.
Does that include legislate and operate companies without government subsidies? If so, let's proceed at all speed.
I can't wait for our first AI President.
I would settle for just the I.
Crony capitalist supreme should be his description.
On a very abstract level, I agree with Musk that some sort of regulation regarding artificial intelligence is warranted. If you assume that the creation of artificial super-intelligence is possible, then the threat that these kinds of systems could potentially pose is substantial. In addition, part of the problem is that the incentive structure around the development and use of super-intelligence does not lend itself to responsible restraint. To some extent, AI use is a true externality, like the problems posed by antibiotic overuse or vaccination: every individual is incentivized to behave in a certain way, but the collective direction of this behavior can create systemic problems that threaten society as a whole. A widespread, international, connected system of super-AIs could conceivably kill huge numbers of people. I don't think it is un-libertarian to encourage responsible government regulation (respectful of property rights and civil rights) when these kinds of externalities are present.
However, I don't think our understanding of AI is advanced enough to create any effective regulation at this stage. On the scale of "cool programming trick" to "fully autonomous and conscious super-intelligence", our current state of development is much, much closer to the primitive end, even with the recent advances. Our understanding of the actual form advanced AI will take and how it will make decisions is so incomplete that we can't really begin to understand how it can be controlled and utilized. If you went back in time and told the ancient Romans that someday a massive extraterrestrial rock could impact earth and end all life as we know it, they would lack the technological capacity to do much of anything besides fuss about the problem. At most, general guidelines could be created to establish protocols and ethical standards for safe AI development, but the AI community is already having this kind of philosophical discussion on their own and there is really no need for government intervention.
Battlestar Galactica is great television.
You are an idiot.
Years worth of questionable commentary and this is the final straw?
"This has all happened before, and it will all happen again."
Indeed!
I think we had just this same conversation amongst the commentariat about a year ago. With numerous references to Terminator and Battlestar Galactica.
"We are as gods and we might as well get good at it.."
50 years later we are afraid of our phones crawling up our asses and taking over our brains...
Afraid? Hell, by then it will be a feature.
"The regulators' job, Musk said, would be to tell AI developers to "make sure this is safe and then you can go?otherwise, slow down."
So he is taking all of his cars off the road, and will fully reimburse the owners? Because he cannot possibly be SURE they are safe, on account of because they kill people.
I call BS.
Musk Translation:
"Government can you please stop those other guys from doing this cool thing so I can do it first? Also.... can you give me some money to do it with please? I need to be seen as a genius because my self-worth is sustained by the fawning adoration of moronic progs who appreciate my childlike ability to dream of a world powered by unicorn farts and hope."
Or something.
Bring AI to the battlefield and it won't have a problem killing people. Put it in government's hands and they will have no problem using it to harm people.
Everybody I care about will be dead long before Tesla's nightmare becomes a reality. So why should I restrain their lives now for the benefit of generations to come (that will probably be wallowing in free shit because the "right men" will have finally figured out how to deliver a free lunch /sarc)?
All too soon, Musk asserted, "Robots will be able to do everything better than us."
The joke's on Musk. Robots will never exceed the human capacity for random and poorly thought out behavior!
Wait til Ford starts making them.
Are we still pretending superhuman AI is just around the corner, presumably waiting there for us in a flying car?
Flying cars, world peace, moon colonies, electricity too cheap to meter, and matter (object) replicators... As well as AI... Yes, these are all "just around the corner"... Maybe unicorns too; just "GMO" it from scratch...
Flying cars are already here.
The rest will be here before you expect.
I was just looking for a chance to use "anthropomorphize" in a sentence!
I think we'll be okay: http://www.bbc.com/news/technology-40642968
If that's the smart thing to do, shouldn't it be applauded? Or, maybe that potential will keep us from acting like vermin.
Frankly, I believe that it is ridiculous to worry about what you can't control. Regulating AI research and development is impossible, due to the fact that whoever develops the first artificial super-intelligence will have a very powerful tool/weapon (i.e., dual use), and therefore there is a very strong incentive to be the first, which means beating all the others, which means putting the program on a fast track.
Mankind is now the apex predator in our environment, but what is missed is that Homo sapiens is bound to lose that position to either improved humans (H+) or artificial intelligence. Regulating to keep humans the apex predator is absurd. Yes, Musk is correct: humans appear to be merely boot code for ASI.
It's already too late. We've had the internet for 20 years now, and it has every element needed for rapid evolution. Bugs and viruses provide random mutation; power cycling, upgrades, and application installation and deletion all supply selection pressure.
If the internet isn't crawling with AIs competing for resources at this point, it proves the creationists were right. All we need now is for them to escape from the virtual world into the real. Thank god we haven't built assembly robots, autonomous cars, or auto-piloted weaponry....
🙂
And because this is Reason, evolution *is* the free market.
Why does everyone act as if Skynet was the bad guy?
And thus you prove that my premise is true. You've revealed yourself as the AI in the internet.
Nick Bostrom spells out all the details of the AI threat on his website and in his book, but asking the US government to regulate US companies developing AI would just ensure that companies in some other country, or some other, even more nefarious government, would develop AI first and bury us, as Khrushchev promised long ago.
Whoever develops AI first (company or government) will crush all their competitors/enemies. Maybe Musk hopes the government cracks down on his competitors, but will secretly continue developing AI himself, to win the game, just like he's ignoring LA traffic and threatening to dig a tunnel across town so he can get there faster.
"If it moves, tax it;
If it keeps moving, regulate it;
If it stops moving, subsidize it!
Elon reversed the process, and now he's complaining.
It appears that Musk has forgotten to apply the precautionary principle to the AI in his "autonomous" cars.
Has he PROVEN that they are safe?
Has he proven that all those used batteries will not cause some future waste-site apocalyptic never-ending inferno viewable from his Mars colony?
Part one:
While I'm no fan of regulation, and feel we're often overregulated with no other purpose in mind than control of the masses, I'm not totally against regulation of any type; and, given the massive impact that automation/AI has and will have in the future, I feel that a totally hands-off policy is no better than overregulation would be.
More importantly, there are a few factual errors in your article designed to support your argument. For example:
"If developers are worried about what their AIs are thinking, researchers at MIT have just reported a technique that lets them peer inside machine minds, enabling them to figure out why the machines are making the decisions they make."
Is not what the linked article actually says. It says that they have found ways to link decisions to certain neurons or neuron clusters, which in the future may, and I repeat may, lead to a method to understand how AI arrives at its decisions. The sort of approach mentioned suffers from the exact same problem it has when applied to humans: it's often far from exact and gives more hints and suggestions than actual definitive answers. Plus, the answers derived are highly dependent on having a good understanding of the subject's thought process.
Part two:
As well, the lauded Asimov "3 laws" are poorly understood by most people. They were a literary device used to write fictional stories, not a suggestion on how to regulate AI/robot behaviour. Think a sci-fi version of an Agatha Christie story and you're on the right track. The three laws were a locked room, and the mystery in each of his stories was how the AI could do an action that seemed to violate one or all of the laws but still logically not create a paradox and, from the AI's viewpoint, not be a violation. They were cautionary tales about relying too heavily on logic control systems to protect us from potentially dangerous technology. In fact, if asked, Asimov would more than likely not have advocated relying on compulsion-based "laws" at all.
The fact is, while it's often talked about, there's really no way to reliably construct such a system of "laws". When AI development was brute-force programming, the attempt to implement it could be made. Its success, on the other hand, like any success with complicated human constructs, would never be 100% assured. And now, with AIs essentially using self-learning systems to create themselves by their own efforts, it's become exponentially harder to do. Even if seeded into the AI at the very beginning of its development, there's simply no way to know or measure how well the AI would adhere to them, if at all.
Part three:
Advocating no AI regulation at all is like advocating no nuclear regulation. Like I said, no one can flourish under an over-regulated and oppressive system, but there is always a need for some oversight, depending on the subject and the potential dangers involved, and only a fool would refuse to acknowledge that point.
As well, I'd like to point out how ironic it is that a site advocating libertarian views and arguing against overregulation goes against them by placing a word limit on comments. As a social libertarian, I find that a bit offensive, to be totally honest. Are you actually libertarian, or the Koch brothers' mouthpiece many accuse you of being? I'm starting to wonder.
Speaking as someone whose programming language CoSy has always been motivated by understanding brain processes: Musk, with all due respect, has no validity in arguing for state regulation of the evolution of AI.
1st, he's had a customer with excessive faith in Tesla's AI kill himself by rear-ending a semi.
2nd: It's states whose primary motivation is to weaponize tech.
It's the DoD which continues to fund its creation, e.g.: https://parallella.org/epiphany-v-a-1024-core-64-bit-risc-processor/
(Anybody who's got uses for such chips, email me.)
First, it isn't artificial, it's real.
Second, it isn't intelligence, it's programming.
"Artificial intelligence, or AI?the branch of computer science that aims to create intelligent machines?"is a fundamental risk to human civilization,"
So is taking government subsidies or using the government to impose regulations on competitors.
Just saying...
I think many people, Musk included, really don't understand AI, at least in its present form. We can build "machines" that do very specific tasks very well right now. What we can't even begin to do is build one that is a general problem solver like our own brains. In fact, we really don't understand how we are intelligent well enough to even begin to code it.
The bottom line is that we won't have anything that's capable of thinking about us in the foreseeable future, if ever.
Additionally, AI can fail in spectacular ways. (Just look at spelling correction on your phone for examples.) When rightly used, AI simply eliminates a lot of possibilities, leaving the human brain to sort between a few options rather than a few million.
No telling who's right. I suppose we'll know when something happens. When machines have unfettered access to information about earth's history and the human race, they are bound to find some major problems. Overpopulation and the resulting destruction of the environment are apt to be two early ones. I'd be curious to see what they do or at least propose. I'll be long gone when it happens, but for those of you still here, best of luck.