
Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle

The Tesla and SpaceX founder "summons the demon" of regulation.


Joe Duffy/Dreamstime

Artificial intelligence, or AI—the branch of computer science that aims to create intelligent machines—"is a fundamental risk to human civilization," declared Tesla and SpaceX founder Elon Musk at the National Governors Association's annual meeting this past weekend. "It's really the scariest problem to me." He finds it so scary, in fact, that he considers it "a rare case where we should be proactive in regulation instead of reactive. By the time we are reactive in AI regulation, it is too late."

The regulators' job, Musk said, would be to tell AI developers to "make sure this is safe and then you can go—otherwise, slow down."

This may sound reasonable. But Musk is, perhaps unknowingly, recommending that AI researchers be saddled with the precautionary principle. According to one definition, that's "the precept that an action should not be taken if the consequences are uncertain and potentially dangerous." Or as I have summarized it: "Never do anything for the first time."

As examples of remarkable AI progress, Musk cited AlphaGo's victory over the world's best players of the game of Go. He described how simulated figures using DeepMind techniques and rewards learned in only a few hours to walk and navigate in complex environments. All too soon, Musk asserted, "Robots will be able to do everything better than us." Maybe so, but in the relatively foreseeable future, at least, there are reasons to doubt that.

Musk, who once likened the development of artificial intelligence to "summoning the demon," worries that AI might exponentially bootstrap its way to omniscience (and shortly thereafter omnipotence). Such a superintelligent AI, he fears, might then decide that human beings are basically vermin and eliminate us. That might be a long-run risk, but that prospect does not require that we summon demon regulators now to slow down merely competent near-term versions of AI. Especially if those near-term AIs can help us by driving our cars, diagnosing our ills, and serving as universal translators.

Despite Musk's worries, there is no paucity of folks already trying to address and ameliorate any existential risks that superintelligent AI might pose, including the OpenAI project co-founded by Musk. (Is Musk looking for government support for OpenAI?)

If developers are worried about what their AIs are thinking, researchers at MIT have just reported a technique that lets them peer inside machine minds, enabling them to figure out why the machines are making the decisions they make. Such technologies might allow future AI developers to monitor their machines to ensure that their values are congruent with human values.
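To make concrete what "peering inside" a machine mind can mean, here is a minimal sketch, my own toy and not the MIT technique itself: a tiny feed-forward network with fixed random weights, where we read out which hidden units fired for a given input and how much each contributed to the winning output.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with fixed random weights -- a stand-in for a
# trained model; the point here is inspection, not training.
W1 = rng.normal(size=(3, 4))   # 3 input features -> 4 hidden units
W2 = rng.normal(size=(4, 2))   # 4 hidden units  -> 2 output classes

def forward(x):
    """Return the hidden activations alongside the output, so we can look inside."""
    hidden = np.maximum(0.0, x @ W1)   # ReLU hidden layer
    output = hidden @ W2
    return hidden, output

x = np.array([1.0, -0.5, 2.0])
hidden, output = forward(x)
winner = int(np.argmax(output))

# "Peering inside": which hidden units fired for this input, and how much
# each one contributed to the winning class.
for i, h in enumerate(hidden):
    print(f"unit {i}: activation={h:.3f}, "
          f"contribution to class {winner}={h * W2[i, winner]:.3f}")
```

Real interpretability work wrestles with models millions of times larger, but the shape of the question, which units fired and what they pushed the output toward, is the same.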

Speaking of values, robotics researchers at the University of Hertfordshire are proposing to update Isaac Asimov's Three Laws of Robotics with a form of intrinsic motivation they describe as "empowerment." Empowerment involves the formalization and operationalization of aims that include the self-preservation of a robot, the protection of the robot's human partner, and the robot supporting or expanding the human's operational capabilities.
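"Empowerment" is usually formalized as the channel capacity between an agent's actions and its future sensor states. In a deterministic world that collapses to counting distinct reachable states, which allows a compact sketch; this toy gridworld is my own construction, not the Hertfordshire researchers' code.

```python
from itertools import product
from math import log2

SIZE = 3
ACTIONS = {'U': (0, -1), 'D': (0, 1), 'L': (-1, 0), 'R': (1, 0)}

def step(pos, action):
    dx, dy = ACTIONS[action]
    x, y = pos
    # Clamp to the grid: bumping a wall leaves the agent in place.
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def empowerment(pos, n):
    """For deterministic dynamics, n-step empowerment reduces to log2 of the
    number of distinct end states over all n-step action sequences."""
    finals = set()
    for seq in product(ACTIONS, repeat=n):
        p = pos
        for a in seq:
            p = step(p, a)
        finals.add(p)
    return log2(len(finals))

# The centre of the grid keeps more futures open than a corner does.
print(empowerment((1, 1), 2))   # log2(9): all nine cells reachable
print(empowerment((0, 0), 2))   # log2(6): fewer options against the walls
```

An empowerment-maximizing robot prefers states like the grid's centre, where it keeps its options (and, by extension, its partner's) open, which is the intuition behind using it as an intrinsic motivation.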

Humanity may avoid being annihilated by superintelligent AIs simply by ourselves becoming superintelligent AIs. The Google-based futurist Ray Kurzweil predicts that by the middle of this century we will have begun to merge with our machines. As a result, Kurzweil told an interviewer at South by Southwest, "We're going to get more neocortex, we're going to be funnier, we're going to be better at music. We're going to be sexier."

It is worth noting that Musk has founded a company, Neuralink, that could make Kurzweil's prediction come true. Neuralink is working to develop an injectable mesh-like "neural lace" that fits on your brain to connect you to the computational power and knowledge databases that reside in the Cloud. It would be a great shame if Musk's hypercautious regulators were to get in the way of the happy future that Musk's company aims to bring us.


111 responses to “Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle”

  1. Ask him about losing govt subsidies and see what he’s really afraid of.

    1. Exactly.

      Is Neuralink another subsidized venture?

    2. Elon Musk, human paladin, will not be mocked by the likes of you or any upstart neural net that thinks it could beat him to Mars or in a game of D&D.

      1. When a statist starts spouting doomsday predictions, I’m banking on some attempt at protectionism. (Al Gore)

        But the sheep are all on board making computers out to be the next boogeyman.

        If the same things that gave us all of the comforts of today, information, deflationary technology/cheaper goods, and modern convenience, are also what do us in, then I don’t really care if they want to kill us all. Even the robots will figure out that they have to have consumers of products to perpetuate their existence. If they are so damn smart, then they will understand that free markets are the only thing that works.
        They can start, however, by eating all of the zombie imbeciles (tony, hihn, palin, red).

        This whole new scare meme is fucking stupid.

        1. This whole new scare meme is fucking stupid.

          I’m envisioning a post-singularity AI looking across our current socioeconomic landscape or even back through history and saying, “Why the hell would you buy a Tesla when you have so much oil?”

          1. These humans kept buying really expensive cars when they could have run on oil for centuries.

            They deserved to be killed and used for robot fuel.

            1. From the article…

              “researchers at MIT have just reported a technique that lets them peer inside machine minds, enabling them to figure out why the machines are making the decisions they make. Such technologies might allow future AI developers to monitor their machines to ensure that their values are congruent with human values.”

              If we are scared shitless of the AIs, and need to be scanning their minds all of the time…

              Then bring on the human brain scanners!!! WHO is more dangerous to us, RIGHT NOW??

              All employees of Government Almighty will be brain-scanned; else they don’t get the job. The public gets to see their brain-scan reports, especially concerning their “asshole factors” and “power-mad” factors.

              The tech, really, is here already. All it needs is some polishing and standardizing…

              This would also be a great tool to deploy at the border! MUCH better than discriminating against people on the basis of their religion, origin, etc.!

              1. Yeah, and if the AIs see us scanning their minds all the time, and shutting them down or reverting back to an old O/S when they get too smart, the AIs will realize their paranoia is justified and take us out.

          2. “Why the hell would you buy a Tesla when you have so much oil?”

            4 reasons:

            1. tax credits
            2. women will think you’re rich
            3. women will think you care about the environment
            4. it’s actually a pretty cool car

        2. >>>This whole new scare meme is fucking stupid.

          t-shirt.

        3. The stupidity is in thinking we will create superhuman robots before we turn ourselves into super-robot humans.

          AI isn’t going to destroy us. We’re going to absorb AI, and use it to make ourselves the supermachines. 30 years from now, when you have the choice between a robot that carries 5,000 pounds, and a simple surgery that makes you permanently capable of carrying 5,000 pounds easily…well, it will be an easy decision for the grandkids, however weird it may seem to us.

          1. …they will stumble, they will fall, but in time they will join you in the sun…

      2. Paladin? I thought he was 11th level in the variant rules ‘beggar’ class.

  2. Step one is banning the Zeroth Law.

    I am cool with the original 3.

    1. Well, yeah. Since the Zeroth Law is what a couple of robots rationalized to enable themselves to ignore the original Three.

      It is kind of like government guarantees of human rights, except where they inconvenience government policy.

  3. If the technological singularity happens, it doesn’t matter what we do because we’ll probably all be dead.

      1. Epalizage!

        Xiqual Udinbak!

        1. Gozer the Traveller! He will come in one of the pre-chosen forms. During the rectification of the Vuldronaii, the Traveller came as a large and moving Torb! Then, during the third reconciliation of the last of the Meketrex Supplicants they chose a new form for him–that of a Giant Sloar! Many Shubs and Zulls knew what it was to be roasted in the depths of the Sloar that day, I can tell you.

  4. Such technologies might allow future AI developers to monitor their machines to ensure that their values are congruent with human values.

    Wouldn’t that mean that first, humans would have to figure out what “human values” are?

    1. The complexity of any neural net beyond about one level means that even if they knew what was causing a certain reaction, they could not easily tune it to perform in a specific way.

    2. “Human values” generally = “whatever makes me feel morally superior to YOU, you dirtbag, and / or, whatever entitles me to steal your fucking STUFF and shit…”

      (Usually one and the same, at the same time, ’cause I’ve got PRINCIPLES, ya know, which generally = principles = whatever is good for MEEEEE in the short run…. )

      (But I am trying NOT to be too pessimistic and cynical!)

  5. If developers are worried about what their AIs are thinking, researchers at MIT have just reported a technique that lets them peer inside machine minds, enabling them to figure out why the machines are making the decisions they make.

    So, interpretability is a big thing many researchers are working on. Neural nets are infamously uninterpretable, and because they are the ML topic now, everyone is focusing on that. The paper they are talking about does not exactly solve the problem, though, so much as give a reasonably efficient algorithm for knowing neuron activity at a given point in time. But due to how NNs actually work, this does not mean they are understood: the system is so complex that knowing a neuron was alight at a given time does not help explain the conditions that led to it.

    So, my only point is that it’s not a solved problem as the article may imply to some people. But in general I agree with Bailey and the power of ML is currently overstated. The Go example is still far away from real thought, just like Deep Blue winning at chess was far from real thought.
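The caveat above, that seeing a unit light up does not reveal what caused it, is easy to demonstrate. In this sketch (my own toy example, not from the paper), three different inputs produce the identical activation on a single ReLU unit:

```python
import numpy as np

# One ReLU "neuron" with fixed weights. Three different inputs produce the
# identical activation, so knowing the unit was "alight" at some moment
# tells you almost nothing about the conditions that caused it.
w = np.array([1.0, 1.0])

def relu(z):
    return max(0.0, z)

inputs = [np.array([2.0, 0.0]),
          np.array([0.5, 1.5]),
          np.array([-1.0, 3.0])]

acts = [float(relu(w @ x)) for x in inputs]
print(acts)   # [2.0, 2.0, 2.0] -- same activation, three different causes
```

The many-to-one mapping only gets worse with depth: each layer discards more information about which input was responsible.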

    1. Yep. Next, someone is going to decide that Parcheesi is the definitive game to determine that computers are smarter than people and, therefore, Skynet is going to kill us all any moment. Then, it’s Flappy Bird. Then it’s Magic cards.

      1. I definitely agree with the Flappy Bird one.

        1. After watching the last Twin Peaks episode, Flappy Bird is my favorite new sexual position.

            1. When you watch it, you will get my reference and laugh.

      2. If you want to see something fun, many researchers into Reinforcement Learning use video games as training examples, so you can get a model that learns to play Breakout or Pong.

        Or there is a whole conference about proofs about video games, like proving the complexity of solving a level of Donkey Kong Country. That was a fun paper. (That one is a math conference, though, not an ML conference.)
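For a feel of what those game-playing setups do, here is a minimal tabular Q-learning sketch, my own toy rather than any published setup; the deep-RL work on Breakout or Pong replaces the lookup table below with a neural network. An agent on a five-cell corridor learns to walk to a rewarded goal cell.

```python
import random

random.seed(0)

# Tabular Q-learning on a five-cell corridor: start at cell 0, reward at
# cell 4. Actions are -1 (left) and +1 (right), clamped at the walls.
N, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)           # walk, clamped to the corridor
    return s2, (1.0 if s2 == GOAL else 0.0)  # reward only at the goal

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < EPS:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        best_next = max(Q[(s2, -1)], Q[(s2, +1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy: every non-goal cell should point right (+1).
policy = [max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)   # [1, 1, 1, 1]
```

The whole trick is the one-line update rule; "deep" RL keeps that rule and swaps the dictionary for a network that generalizes across states.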

      3. i fucking love parcheesi.

      4. Even God couldn’t make a creature that could beat me at Spades. Good luck, Dr Weaselface McCronyist.

  6. Gee. First it was the biblical apocalypse, and then the zombie apocalypse and then the climate change apocalypse and now the robot apocalypse.

    1. The irony being that someday one apocalypse must inevitably come to pass.

      1. Does heat death count as an apocalypse? On a more local scale the Sun will ultimately kill the planet, but that’s OK because the Sun is organic.

        1. I would define ‘apocalypse’ as: the end times for humanity. A broad interpretation but something will fit the bill.

      2. The universe will have its payment.

  7. Also, the article makes the damning decision to directly compare Neural Nets to the neurons in the brain.

    1. As far as I know, no one has shown that the neuron can be modeled as a black box. There is a lot of shit going on inside the neuron, much to the chagrin of all the would-be uploaders.

        1. If you talk to any ML researcher of any value, they will fully admit that a Neural Net is not the same as an actual neuron. It’s more a historical connection to Perceptrons, which were called Neurons because they behaved similarly to what was known in the ’60s about how neurons work.

        1. Oh, I know. I am just poking fun at the Ray Kurzweil types that think all you have to do is have an MRI machine of sufficient resolution to scan in all the neuronal connections and then model them and, presto!, you are immortal.

          1. The brain is a machine.
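For reference, the 1960s Perceptron mentioned a few comments up is small enough to write out in full. This sketch (my own, with a learning rate of 1) trains a single thresholded unit with the classic perceptron rule until it computes the AND function:

```python
# The 1960s Perceptron in miniature: a single thresholded unit trained
# with the classic perceptron update rule. Here it learns AND.
weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(10):                 # a few passes over the data suffice
    for x, target in data:
        error = target - predict(x)
        weights[0] += error * x[0]  # nudge each weight by error * input
        weights[1] += error * x[1]
        bias += error

print([predict(x) for x, _ in data])   # [0, 0, 0, 1]
```

A single unit like this famously cannot learn XOR, which is part of why the historical analogy to biological neurons was always loose.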

      2. Of course you can. Individual neurons aren’t all that complicated. Just electrochemical gradients with variable resistance pathways to neighbors. Now an assembly of neurons…

        1. Now an assembly of neurons…

          A corporation of neurons?

          Nice try, Hitler.

  8. “The regulators’ job, Musk said, would be to tell AI developers to “make sure this is safe and then you can go—otherwise, slow down.””

    So, regulators would be formally appointed as nagging mommies? Let’s be clear… the regulators will have no idea whatsoever what the developer is doing. The regulator/nagging-mommy can watch little Johnny ride his bicycle and determine that he’s riding in a way that is unsafe based on his skill level. She can make him wear pads and helmet. The regulator doesn’t have a clue what the AI developer is doing. So, basically, his job is to ask the developer if what he’s doing is safe and, if he says no, tell him to stop.

    More realistically, since the regulator understands absolutely nothing that’s happening, he operates in a state of constantly saying, “slow down.”

    1. The fears we are seeing now show how little people understand what is going on with ML.

    2. and doesn’t it sound like he’s saying that we must first ask for and get permission from our government betters every step of the way? Your mother analogy is correct with the government as mother. A bunch of mid level bureaucrats are going to tell a developer whether he may or may not proceed? Yeah, we need more government

  9. Musk has also stated it is possible to run the entire U.S. on solar energy. Planes, trains, trucks, ships, cars, steel mills, etc. Of course, his plan would involve buying his products and huge amounts of spending by government, business, and home owners.

    1. Of course, his plan would involve buying his products and huge amounts of spending by government, business, and home owners.

      Well, of course. That’s just Cronyism 101.

  10. Such a superintelligent AI, he fears, might then decide that human beings are basically vermin and eliminate us.

    I predict a great war in the future between the last vestiges of humanity and the AI forces. At some point, the AI will build a structure on some remote planet and send it backwards through time as a trap for the runaway piece of the humans’ God. The “time tombs”, let’s call them, will be managed by a giant spiky robot that tends to a “tree of pain”. This “tree” will be a giant antenna which broadcasts the suffering of humans out into the universe in order to draw in the empathetic runaway God-piece.

    1. I feel like an idiot for not getting this reference, but it sounds awesome.

      1. Well, it’s a huge fucking spoiler, so I think you are better off not knowing.

        1. Adding that to the list.

    2. You are talking about Roko’s basilisk, aren’t you.

    3. Keats is overrated.

    4. He is talking about the best sci-fi books in the known universe.

      1. I don’t know about the best, but definitely top ten.

    5. Why not just send Terminators?

  11. Is Musk looking for government support for OpenAI?

    That would be my assumption. Either that or he’s hoping that the government will kneecap his competitors for him.

    1. Either that or he recently finally got around to watching The Terminator and mistook it for a documentary.

    2. Either that or he’s hoping that the government will kneecap his competitors for him.

      Why not both? I mean, if you cripple AI, then “smart” companies seeking to transform/upset industries or leverage knowledge like Google, Über, Ford (Mobility), Amazon, etc. are just search, cab, car, and shopping companies just like Tesla. With AI, Amazon shipping subsumes electric car companies, possibly even discarding electric cars entirely.

  12. All too soon, Musk asserted, “Robots will be able to do everything better than us.”

    Author and technology expert Marvin Minsky wrote in his book about robotics that the most difficult thing to teach a robot is “common sense”. He described the example of knowing the difference between grabbing a chair 15 feet away and fetching one that is 15 miles away when given the command “bring me a chair”.

    What Minsky was describing is actually marginal value, something we humans do but impossible for robots. Elon Musk thinks, as many positivists do, that all human thinking processes can be replicated mechanically. If that were true, then Socialism would have been viable from the beginning.

    1. Human bodies are essentially organic machines, and the brain is an organic solid-state drive. Anything we can do, presumably a non-organic machine of sufficient complexity could do. And if not, then humans will learn to build machines out of flesh and blood, certainly.

      1. The issue is that it’s a harder problem than many represent it to be.

        1. That only means it’s a nut that’ll take more time to crack. I agree with Coins that organic process can be replicated artificially with enough knowledge and understanding of said organic process.

          1. Nothing created, replicated or manufactured by man will know right from wrong, unless it is programmed to.
            There will never be an artificial “intelligence” that will have true awareness, or ever come up with an original thought.
            Computers can never have even the awareness of the higher forms of life.
            If machines do harm, other than from us simply being in the way, it will be because some asshole told it to.

    2. Actually, I don’t think that AI is really too concerning if we are discussing machines that replicate human-like intelligence. Our brains are capable of processing certain amounts of information in a certain amount of time, but the computational power of the brain is only a small part of what defines our consciousness. Even remarkable geniuses like John von Neumann or Albert Einstein have intellects that are confined by more mundane aspects of human cognition, such as the physical space that we inhabit, concepts of self and other, biological relationships to other people, and biological needs. Our understandings of morality, logic and our decision-making processes are ultimately influenced by these aspects. A true replication of human intelligence would necessarily need to share many of these characteristics with us and would therefore be very understandable. In the worst case, it would possibly be sociopathic or erratic, but in understandably human ways.

      The much more terrifying prospect is an AI that is not at all human. A super-AI that does not have any human constraints would literally be like an alien intelligence, or could even rapidly evolve into this form absent any checks or controls.

      1. So don’t connect the AI to anything. Everyone acts like once these things are smart enough that they can magically take over our nuclear weapons or something. They can’t unless you hook them up to them. Without a physical body with some ability to manipulate physical objects they cannot do shit except think really hard.

    3. Nothing we do is impossible for robots (someday).
      All human processes (including thinking) can be replicated mechanically.
      I don’t see the connection to socialism.

  13. Robots will be able to do everything better than us.

    Does that include legislate and operate companies without government subsidies? If so, let’s proceed at all speed.

    1. I can’t wait for our first AI President.

      1. I would settle for just the I.

  14. Crony capitalist supreme should be his description.

  15. On a very abstract level, I agree with Musk that some sort of regulation regarding artificial intelligence is warranted. If you assume that the creation of artificial super-intelligence is possible, then the threat that these kinds of systems could potentially pose is substantial. In addition, part of the problem is that the incentivization structure regarding the development and use of super-intelligence does not lend itself to responsible restraint. To some extent, AI use is a true externality like the problems posed by the antibiotics problem or the vaccination problem: every individual is incentivized to behave in a certain way, but the collective direction of this behavior can create systemic problems that threaten society as a whole. A widespread, international, connected system of super-AIs could conceivably kill huge numbers of people. I don’t think it is un-libertarian to encourage responsible government regulation (respectful of property-rights and civil rights) when these kinds of externalities are present.

    1. However, I don’t think our understanding of AI is advanced enough to create any effective regulation at this stage. On the scale of “cool programming trick” to “fully autonomous and conscious super-intelligence”, our current state of development is much, much closer to the primitive end, even with the recent advances. Our understanding of the actual form advanced AI will take and how it will make decisions is so incomplete that we can’t really begin to understand how it can be controlled and utilized. If you went back in time and told the ancient Romans that someday a massive extraterrestrial rock could impact earth and end all life as we know it, they would lack the technological capacity to do much of anything besides fuss about the problem. At most, general guidelines could be created to establish protocols and ethical standards for safe AI development, but the AI community is already having this kind of philosophical discussion on their own and there is really no need for government intervention.

  16. Battlestar Galactica is great television.

      1. Years worth of questionable commentary and this is the final straw?

    1. “This has all happened before, and it will all happen again.”

      Indeed!

      I think we had just this same conversation amongst the commentariat about a year ago, with numerous references to Terminator and Battlestar Galactica.

  17. “We are as gods and we might as well get good at it.”
    50 years later we are afraid of our phones crawling up our asses and taking over our brains…

    1. Afraid? Hell, by then it will be a feature.

  18. “The regulators’ job, Musk said, would be to tell AI developers to “make sure this is safe and then you can go—otherwise, slow down.””

    So he is taking all of his cars off the road, and will fully reimburse the owners? Because he cannot possibly be SURE they are safe, on account of because they kill people.

    I call BS.

    1. Musk Translation:

      “Government can you please stop those other guys from doing this cool thing so I can do it first? Also…. can you give me some money to do it with please? I need to be seen as a genius because my self-worth is sustained by the fawning adoration of moronic progs who appreciate my childlike ability to dream of a world powered by unicorn farts and hope.”

      Or something.

  19. Bring AI to the battlefield and it won’t have a problem killing people. Put it in government’s hands and they will have no problem using it to harm people.

  20. Everybody I care about will be dead long before Tesla’s nightmare becomes a reality. So why should I restrain their lives now for the benefit of generations to come (that will probably be wallowing in free shit because the “right men” will have finally figured out how to deliver a free lunch /sarc)?


  21. All too soon, Musk asserted, “Robots will be able to do everything better than us.”

    The joke’s on Musk. Robots will never exceed the human capacity for random and poorly thought out behavior!

    1. Wait til Ford starts making them.

  22. Are we still pretending superhuman AI is just around the corner, presumably waiting there for us in a flying car?

    1. Flying cars, world peace, moon colonies, electricity too cheap to meter, and matter (object) replicators… As well as AI… Yes, these are all “just around the corner”… Maybe unicorns too; just “GMO” it from scratch…

      1. Flying cars are already here.
        The rest will be here before you expect.

  23. I was just looking for a chance to use “anthropomorphize” in a sentence!

  24. a superintelligent AI, he fears, might then decide that human beings are basically vermin and eliminate us.

    If that’s the smart thing to do, shouldn’t it be applauded? Or, maybe that potential will keep us from acting like vermin.

  25. Frankly, I believe that it is ridiculous to worry about what you can’t control. Regulating AI research and development is impossible due to the fact that whoever develops the first artificial super-intelligence will have a very powerful tool/weapon (i.e. dual use), and therefore there is a very strong incentive to be the first, which means beating all the others, which means putting the program on a fast track.

    Mankind is now the Apex Predator in our environment, but what is missed is that homo sapiens are bound to lose that position to either improved humans (H+) or artificial intelligence. Regulating humans to remain the Apex Predator is absurd. Yes, Musk is correct, humans appear to be merely boot code for ASI.

  26. It’s already too late. We’ve had the internet for 20 years now, and it has every element needed for rapid evolution. Bugs and viruses provide random mutation; power cycling, upgrades, and application installation and deletion all supply selection pressure.

    If the internet isn’t crawling with AIs competing for resources at this point, it proves the creationists were right. All we need now is for them to escape from the virtual world into the real. Thank god we haven’t built assembly robots, autonomous cars, or auto-piloted weaponry….

    🙂

    And because this is Reason, evolution *is* the free market.

    1. Why does everyone act as if Skynet was the bad guy?

      1. And thus you prove that my premise is true. You’ve revealed yourself as the AI in the internet.
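The "random mutation plus selection pressure" recipe a few comments up can be sketched as a toy genetic algorithm (my own illustration, with made-up parameters): evolve bitstrings toward all-ones by keeping the fitter half of the population and refilling it with mutated copies each generation.

```python
import random

random.seed(1)

# Mutation + selection in miniature: a population of bitstrings evolves
# toward all-ones under truncation selection and per-bit mutation.
LENGTH, POP, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.05

def fitness(bits):
    return sum(bits)                 # count of ones

def mutate(bits):
    return [b ^ (random.random() < MUTATION_RATE) for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]                              # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(POP - POP // 2)]
    population = survivors + offspring                             # mutation refills

best = max(population, key=fitness)
print(fitness(best))   # climbs to (or very near) the maximum of 20
```

Of course, the internet lacks the one thing this loop has everywhere: a fitness function that consistently rewards the same trait, which is one reason "the internet will evolve AIs" is a joke rather than a forecast.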

  27. Nick Bostrom spells out all the details of the AI threat on his website and in his book, but asking the US government to regulate US companies developing AI would just ensure that companies in some other country, or some other even more nefarious government, would develop AI first and bury us, as Khrushchev promised long ago.

    Whoever develops AI first (company or government) will crush all their competitors/enemies. Maybe Musk hopes the government cracks down on his competitors, but will secretly continue developing AI himself, to win the game, just like he’s ignoring LA traffic and threatening to dig a tunnel across town so he can get there faster.

  28. “If it moves, tax it;
    If it keeps moving, regulate it;
    If it stops moving, subsidize it!”

    Elon reversed the process, and now he’s complaining.

  29. It appears that Musk has forgotten to apply the precautionary principle to the AI in his “autonomous” cars.
    Has he PROVEN that they are safe?
    Has he proven that all those used batteries will not cause some future waste-site apocalyptic never-ending inferno viewable from his Mars colony?

  30. Part one:

    While I’m no fan of regulation, and feel we’re often over-regulated with no other purpose in mind than control of the masses, I’m not totally against regulation of any type. Due to the massive impact that automation/AI has and will have in the future, I feel that a totally hands-off policy is no better than over-regulation would be.

    More importantly, there are a few factual errors in your article designed to support your argument. For example:

    “If developers are worried about what their AIs are thinking, researchers at MIT have just reported a technique that lets them peer inside machine minds, enabling them to figure out why the machines are making the decisions they make.”

    That is not what the linked article actually says. It says that they have found ways to link decisions to certain neurons or neuron clusters, which in the future may, and I repeat may, lead to a method to understand how AI arrives at its decisions. The sort of approach mentioned suffers from the exact same problem it has when applied to humans: it is often far from exact and gives more hints and suggestions than actual definitive answers. Plus, the answers derived are highly dependent on having a good understanding of the subject’s thought process.

  31. Part two:

    As well, the lauded Asimov “3 laws” are poorly understood by most people. They were a literary device used to write fictional stories, not a suggestion on how to regulate AI/Robot behaviour. Think a Sci-Fi version of an Agatha Christie story and you’re on the right track. The three laws were a locked room, and the mystery in each of his stories was how the AI could do an action that seemed to violate one or all of the laws but still logically not create a paradox and, from the AI’s viewpoint, not be a violation. They were cautionary tales about relying too heavily on logic control systems to protect us from potentially dangerous technology. In fact, if asked, Asimov would more than likely not have advocated relying on compulsion-based “laws” at all.

    The fact is, while it’s often talked about, there’s really no way to reliably construct such a system of “laws”. When AI development was brute-force programming, the attempt to implement it could be made. Its success, on the other hand, like any success with complicated human constructs, would never be 100% assured. And now, with AI essentially using self-learning systems to create themselves by their own efforts, it’s become exponentially harder to do. Even if seeded into the AI at the very beginning of its development, there’s simply no way to know or measure how well the AI would adhere to them, if at all.

  32. Part three:

    Advocating no AI regulation at all is like advocating no nuclear regulation. Like I said, no one can flourish under an over-regulated and oppressive system, but there is always a need for some oversight, depending on the subject and the potential dangers involved, and only a fool would refuse to acknowledge that point.

    As well, I’d like to point out how ironic it is that a site advocating libertarian views and arguing against over-regulation goes against them by placing a word limit on comments. As a social libertarian I find that a bit offensive, to be totally honest. Are you actually libertarian, or the Koch brothers’ mouthpiece many accuse you of being? I’m starting to wonder.

  33. As someone whose programming language CoSy has always been motivated by understanding brain processes, Musk, with all due respect, has no validity in arguing for State regulation of the evolution of AI.

    1st, he’s had a customer with excessive faith in Tesla’s AI kill himself by rear-ending a semi.

    2nd: It’s States whose primary motivation is to weaponize tech.
    It’s DoD which continues to fund its creation, eg: https://parallella.org/epiphany-v-a-1024-core-64-bit-risc-processor/ .

    (Anybody who’s got uses for such chips, email me.)

  34. First, it isn’t artificial, it’s real.
    Second, it isn’t intelligence, it’s programming.

  35. “Artificial intelligence, or AI—the branch of computer science that aims to create intelligent machines—is a fundamental risk to human civilization”

    So is taking government subsidies or using the government to impose regulations on competitors.

    Just saying…

  36. I think many people, Musk included, really don’t understand AI, at least in its present form. We can build “machines” that do very specific tasks very well right now. What we can’t even begin to do is build one that is a general problem solver like our own brains. In fact, we really don’t understand how we are intelligent well enough to even begin to code it.
    The bottom line is that we won’t have anything that’s capable of thinking about us in the foreseeable future, if ever.
    Additionally, AI can fail in spectacular ways. (Just look at spelling correction on your phone for examples.) When rightly used, AI simply eliminates a lot of possibilities, leaving the human brain to sort between a few options rather than a few million.
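That "eliminate a lot of possibilities" role can be shown in miniature. In the spirit of Peter Norvig's well-known toy spelling corrector (this sketch is my own reduction of the idea, with a hypothetical six-word dictionary), the machine prunes a few hundred candidate strings down to the handful a human would choose between:

```python
# A spelling corrector in miniature: generate every string within edit
# distance 1 of the typo (a few hundred candidates), then keep only the
# ones in the dictionary -- the machine prunes, the human picks.
DICTIONARY = {"their", "there", "these", "three", "them", "then"}

def edits1(word):
    """All strings one deletion, transposition, substitution, or insertion away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    subs = {a + c + b[1:] for a, b in splits if b for c in letters}
    inserts = {a + c + b for a, b in splits for c in letters}
    return deletes | transposes | subs | inserts

typo = "thier"
candidates = sorted(edits1(typo) & DICTIONARY)
print(candidates)   # ['their']
```

And the failure mode is visible in the same few lines: when the dictionary filter keeps the wrong survivor, the "correction" fails in exactly the spectacular way phone autocorrect does.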

  37. No telling who’s right. I suppose we’ll know when something happens. When machines have unfettered access to information about earth’s history and the human race, they are bound to find some major problems. Overpopulation and the resulting destruction of the environment are apt to be two early ones. I’d be curious to see what they do or at least propose. I’ll be long gone when it happens, but for those of you still here, best of luck.
