Artificial Intelligence

How Artificial Stupidity Can Kill Us All

The Singularity is closer and dumber than you think.

Forget Skynet. The real danger isn't superintelligent machines. It's powerful dumb ones.

Three items combined serendipitously to bring this peril to my attention: William Hertling's sci-fi novel Avogadro Corp., a New Scientist article about an app that generates emails that fake empathy, and a New York Times blog post on the dangers of artificial stupidity. As the tagline on the cover of Hertling's novel warns, "The Singularity is closer than it appears." Closer and klutzier.

In Hertling's 2014 novel, computer genius David Ryan heads the Email Language Optimization Project at Avogadro Corporation (a very thinly disguised stand-in for Google), where his team has created ELOPe—an app that helps users "craft more compelling, effective communications." In order to persuade, ELOPe reads through the emails received and sent by the target. Based on what it finds, the app makes suggestions for word choices, data, reasoning, and emotional appeals that will motivate the recipient to act as the sender wants.

Ryan describes his new app as the biggest improvement to email since spell-check and grammar check. Unfortunately, a hostile Avogadro Corp ops manager wants to kill the ELOPe project because he thinks that it is using too many of the company's computational resources. In desperation, Ryan modifies ELOPe, instilling it with the goal of doing whatever it must to persuade people to grant it the resources it needs. A flood of emails ensues, and let's just say the world becomes a pretty interesting place thereafter.

Shortly after finishing Avogadro Corp, I came across New Scientist's article about the Crystal Knows app. Crystal Knows, which bills itself as the "biggest improvement to email since spell-check," promises that it can "show you the best way to communicate with any coworker, prospect, or customer based on their unique personality." How? By applying its algorithm to the online information about a recipient and then helping you to select "the words, phrases, style, and tone you should use to reach the recipient in the way that they like to communicate, rather than your own." Instant empathy.

Crystal Knows is far from alone in trying to figure out how to push your empathy buttons. For example, Persado is an automated persuasion platform with a personality analysis algorithm; it generates marketing language and emotional insights for client companies aiming to motivate their customers and stakeholders.

The third serendipitous bit of reportage that provoked my interest was a blog post, "The Real Threat Posed by Powerful Computers," by New York Times technology reporter Quentin Hardy. "If the human race is at peril from killer robots," he argues, "the problem is probably not artificial intelligence. It is more likely to be artificial stupidity." Basically, Hardy thinks the threat comes from programs and machines that are over-optimized to achieve a task.

In his 2014 book Superintelligence: Paths, Dangers, Strategies, the Oxford philosopher Nick Bostrom outlined a scenario in which a very powerful computer is programmed to make paper clips. The machine brilliantly and relentlessly pursues this goal and prevents anyone from attempting to change its paper clip imperative. Eventually, the Earth is a mass of paper clips and the computer sets its sights on the rest of the universe.
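Bostrom's scenario is really a point about objective functions: nothing the objective omits can ever constrain the optimizer. A deliberately silly toy sketch (all names and numbers hypothetical, not anything from Bostrom's book):

```python
# Toy paperclip maximizer: the objective scores only paperclips, so any
# state that trades raw matter for clips looks like strict progress.
def utility(state):
    return state["paperclips"]  # no term for anything else humans value

def step(state):
    """Convert one unit of matter into a clip whenever that raises utility."""
    candidate = dict(state, matter=state["matter"] - 1,
                     paperclips=state["paperclips"] + 1)
    return candidate if utility(candidate) > utility(state) else state

state = {"matter": 3, "paperclips": 0}
while state["matter"] > 0:
    state = step(state)
```

The loop halts only when there is literally nothing left to convert; "leave some matter for the humans" never appears in the utility function, so it never matters.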

Just for fun, I'll throw the creation of decentralized autonomous corporations (DAC) into the mix. In his superb science-fiction novel Daemon, Daniel Suarez describes a set of artificial intelligence programs, dubbed the Daemon, that were left behind by a deceased game designer. They autonomously marshal the financial and computational resources that enable them to take over hundreds of companies and recruit human operatives in the real world. The pre-set goals left by the designer include, among other things, killing off the programmers that helped him create the programs. It's worth noting that in none of these speculative scenarios are the machines in any sense conscious; they are simply executing their programming.

As I have explained elsewhere, a DAC might be thought of as an automated nexus of contracts enabled by blockchain technology that can engage in activities such as leasing assets, hiring people, and securing debt or equity to achieve the goals set out in its mission statement. Notionally, DACs operating under a set of publicly available business rules would be incorruptible and more trustworthy than human-run firms. As Dan Larimer, one of the originators of the DAC concept, explained in The Economist: "Although DACs can still be designed to have a robotically inviolable intention to rob you blind, to enter the open source arena they must be honest about their plans to do so."

Still: While its mission statement would be public, a DAC might nonetheless be able to marshal resources and persuasively recruit agents and employees in pursuit of world domination. On the other hand, rival DACs competing in the marketplace and in politics might prevent such an outcome. After all, centralized corporations like Apple and Google have not yet taken over the world.

In Hertling's novel, Mike Williams, the co-developer of ELOPe, eventually argues against Ryan's desperate efforts to cleanse the world's computer networks of ELOPe. Why? Because the post-ELOPe world is becoming much more peaceful and prosperous. "I believe ELOPe already figured out the best way to ensure its own success is to ensure our success, as a company and as a species," explains Williams. "If we destroy ELOPe because we don't understand it, we could throw away the best thing that's ever happened for humankind."

So which outcome of a dumb Singularity do you think is more likely? Paper clips or world peace?



  1. I enjoyed Daemon.

    1. Same here, despite Suarez’s lack of ability to craft realistic characters, believable dialogue, or descriptive passages. The plot is just that impressive.

    2. I, also, too, enjoyed it as well. There is a part two I think called TM, or trademarked, or something.

  2. New movie idea. President Dwayne Elizondo Mountain Dew Herbert Camacho sends a terminator back in time to save Sarah Connor and secure naming rights to an arena, but the terminator was programmed by idiots.

    1. +1 talks like a fag

      1. +1 shit’s all fucked up

    2. The Derpinator

    3. Derpinator: “Sarah Connor?”

      Sarah Connor: “Yes?”

      Derpinator: *kicks in door* “You talk like a fag and your shit’s all retarded. Now come with me if you want to fuck. Brought to you by Carl’s Jr.”

      1. +1 Not Sure

      2. In the sequel, a young John Connor convinces the Derpinator to deliver nonlethal shots to the balls in lieu of killing people.

        1. And that’s how the hit TV show “Ow, My Balls!” was born.

          1. Isn’t that a popular Japanese language game show out of Kyoto?

      3. “Brought to you by Carl’s Jr.”

        Brought to you by In and Out Burger.” Works better.

  3. Really? The naturally-occurring derp isn’t enough? We have some need for artificial stupid? REALLY?

    1. Maybe peak derp is possible after all.

      1. Nearly impossible. Derp just builds upon itself. The more derp in the universe, the more derp is generated until every possible derp has been done. But there’s a lot of space in the universe.

        1. The derpocalypse… It is coming soon! Trump will lead its charge!

          1. The Derpularity Is Near.

  4. The only way one of these could get enough power over all the competition to truly become a threat is if the government mandated it had no competition. Thus we better get ready for the computer program apocalypse.

    1. Yes, we are fucked.

      Its name is Epic, or maybe Cerner.

      In any case it is going to hit us, the humans, through health care. It is collecting and evaluating data as we speak. Also, it wants your picture…just in case you have surgery, so they don’t get mixed up, and fix the wrong guy’s vasectomy reversal.
      It may be sharing data with the big insurance computers. Actually, it totally is. I’ve seen it happen.

      I seem to be the only one creeped out by it though.

  5. I take issue with the headline. The stupidity isn’t artificial at all. It’s very, very real.

    1. Artificial and real aren’t opposites. ‘Artificial’ just refers to something made by people, all of which are indeed real existing things.

      1. Yeah. I think the antonym for artificial is natural.

        1. Well, we’ve already got plenty of NATURAL stupidity.

          1. Do we? Or is stupidity all the product of culture WHICH IS A THING MADE BY PEOPLE

            1. Mind. Blown.

              1. Derp IS a social construct, after all.

            2. So I guess one wonders if Hillary Clinton was merely made by people or designed by people.

              1. What kind of monster would design THAT?

                1. What kind of monster would design THAT?

                  Hitler?

                  1. You know who else was Hitler?

            3. Culture was made by people using brains made by nature.

            4. Well, we haven’t really defined our terms, but one could argue that stupidity is the natural state of the universe, or that the universe naturally tends toward stupidity. We could simply equate stupidity with high entropy, and there you go. Taking that analogy further, we could define an intelligent system as a dissipative structure, a la Ilya Prigogine, which dissipates stupidity into the environment in order to maintain its intelligent state.

              1. Whoa, dude! That’s deep.

              2. +2nd Law of Thermoderptitude

          2. Yeah, but government schools produce much more artificial stupidity.

      2. *drums fingers on table*

        *Phfft* is this thing on?

        1. *squeeeeeeeeeeeeeeeeeeeeeeeeeeeeee*

  6. I think most people don’t know what artificial intelligence actually means. It means programs that learn. As in they (excluding intentional randomness) do not always give the same output from the same input. They learn. They get better. Programming that is much more difficult than it may sound, especially in a general sense, as opposed to some extremely specialized application.
    We’re not talking about some chess game that, with brute force computing, recursively looks ahead to board positions for individual moves and then chooses the best one. While it may appear intelligent, it’s still just brute force. There’s no learning involved. Such an algorithm will still (again excluding intentional randomness) give the same output based upon the same input, every single time.
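The commenter's distinction can be made concrete with a toy sketch (all names hypothetical): a brute-force search is a pure function of its input, while a learner's answer drifts as experience updates its parameters.

```python
def brute_force_best(position, moves, evaluate):
    """Exhaustive search: the same position and evaluator always yield the same move."""
    return max(moves, key=lambda move: evaluate(position, move))

class Learner:
    """Toy learner: a single weight nudged toward observed outcomes."""
    def __init__(self):
        self.weight = 0.0

    def predict(self, x):
        return self.weight * x

    def learn(self, x, outcome, rate=0.1):
        # one gradient step on squared error; the same input now maps
        # to a different output than it did before this call
        self.weight += rate * (outcome - self.predict(x)) * x
```

Call `brute_force_best` a thousand times with the same arguments and you get the same move a thousand times; call `Learner.predict` before and after `learn` and the answer has changed.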

    1. The thing is they will get better, provided that the experience is within the parameters of their programing. And they only get better by rote trial and error. They won’t make the kind of logical leaps that humans do.

      1. And even then they won’t be what we would call conscious or sentient.

        1. Why wouldn’t they be?

          Sentience isn’t the sort of thing you can demonstrate. The only evidence I have that other people are sentient and not zombies is that their brains are presumably structured in the same way that mine is and that I’m sentient, at least. If elaborate neural nets behaved as though they were sentient, we would surely treat them as such.

          All that’s moot, because when superintelligence arrives, it’s not going to be in the form of a vaguely human computer. Their IQ will dwarf ours in the same way that we dwarf rabbits, and rabbits aren’t in a position to determine our place in the food chain.

      2. You don’t need logical leaps when you can iterate through permutations at computational speeds.

      3. Really, John? How interesting! Can you justify that statement or is that all just hot air?

    2. Sarcasmic’s got it right. As long as computers are dumb, paper clips are a very likely scenario. If AI is smart, they won’t make dumb mistakes like that, even if they do something humans don’t want them to do.

      But I thought the Singularity was about humans integrating with technology, not merely using technology, so this article seems to be mis-titled.

      1. The Singularity is the hypothetical event when computers become conscious, self-willed, and self-programming. At that point they don’t need or want instructions from human beings. They are basically a new life form.

        I believe it will happen right about the same time electric cars become practical and powered by fusion reactors. (In other words: never)

        1. (In other words: never)

          No way, dude. We’re only twenty years away from those things!

          1. +1 Mr Fusion

          2. Yep. Always have been. Always will be.

        2. The Singularity is the hypothetical event when computers become conscious, self-willed, and self-programming. At that point they don’t need or want instructions from human beings. They are basically a new life form.

          Wrong.

          1. Generally when you tell someone that they’re wrong, you are supposed to tell them why. Otherwise you come off as an arrogant ass. Unless your intention was to come off as an arrogant ass, in which case you succeeded beautifully.

        3. And when they fly!

  7. So which outcome of a dumb Singularity do you think is more likely? Paper clips or world peace?

    A flood of article headlines that say stuff like “11 things you won’t believe these celebrities wore!” with a provocative thumbnail image that ultimately has nothing to do with the article.

    You could have a bot generate these click-bait articles which manipulate and appeal to people’s innate curiosity.

    Unfortunately, humans being humans will begin to recognize this and then conversely refuse to click on any article where this manipulation is suspected.

    In the case of email, the concept of “trusted senders” will be elevated to such a level that we literally won’t even read or click on emails unless they come from senders we know aren’t using these techniques.

    1. You could have a bot generate these click-bait articles

      I thought they already did.

      1. I think that’s right. At least some of them I’ve seen seem to be computer generated.

  8. I think most people don’t know what artificial intelligence actually means. It means . . .

    Its meaning has changed continuously since I graduated in ’85. I’m not convinced that machine learning is a defining characteristic.

    1. This. Also, it’s been greatly scaled back.

    2. It’s definitely a defining characteristic for people. I mean, think about it. Wise people learn from the mistakes of others. Smart people learn from their own mistakes. Stupid people don’t learn.

    3. Most machine learning is not conceptual learning. It’s just fine-tuning of parameters.

      1. That’s how I understand it.

        A chess program can’t learn how to play checkers on its own. It can just get better at playing chess.

      2. Most human learning isn’t conceptual learning either. And most conceptual learning isn’t actually all that useful for solving problems.

  9. Donald Trump

    1. Is a hump.

    2. Trump’s got what plants crave. He’s got electrolytes.

    3. Was this a Trump bump? Perhaps you ought to go preach it from your Trump Stump. Don’t ask for more or Reason will give another Trump dump. I can’t bring myself to begin a sentence that ends with “Trump rump.”

      1. Carly Fiorina and a twelve inch strap-on go to town on the Trump rump.

  10. Ah, so that’s why the Terminator can’t be reasoned with.

  11. Neither. Since the beginning of the industrial revolution, people have had irrational fears of machines. The basic fear was that these machines would take all the jobs, and they took many of the jobs, but the fear manifested other irrational fears, which in a way are a cultural self defense system. The threat of robots taking over jobs was subconsciously expanded into a more serious fear of robots taking over the world because we all feel deep down that this fear hits more people and makes a more convincing argument when persuading others to fight on your side against the machines. Those fears have transferred slowly onto computers and for similar reasons.

    Dumb machines do dumb things because dumb people tell them to, but human survival instinct will trump any dumb command and eventually we will pull the plug. Intelligent machines have no rational reason to harm humans, in fact, harming humans would be the most counter productive thing they could do, because it invites conflict and conflict invites the possibility of harm to both parties.

    1. Intelligent machines have no rational reason to harm humans, in fact, harming humans would be the most counter productive thing they could do, because it invites conflict and conflict invites the possibility of harm to both parties.

      Not if the military has any say in the matter.

      1. The military already has dumb machines that kill. If they become too efficient at it we pull the plug as soon as we are threatened. If they build anything that is “smart” and can think for itself, it will understand the non aggression principal and why it is more beneficial to avoid harming people than it is to be aggressive. It will also see that there are merits to being cooperative rather than combative. If a so called smart machine doesn’t get this basic logic then it is probably just dumb and either way, once we sense danger, we, as humans will build another machine and program it to kill the dumb machines threatening us.

        1. If they build anything that is “smart” and can think for itself, it will understand the non aggression principal and why it is more beneficial to avoid harming people than it is to be aggressive.

          Seems that a lot of people don’t get that concept. Why would machines? If you can get what you want with aggression, and suffer no consequences, where’s the incentive to go through the trouble of cooperation?

          1. There’s no such thing as “no consequences”… Unless you are a Clinton… I digress… The machine would kill the enemies of the humans that built it, but if it tried to kill the people who built it (US citizens in the case of a military system) then people would react, consequences will ensue… Heck, even if you let it loose in enemy territory, it would soon realize that the enemy fights back.

            1. Except for the Clinton part, that was a very good summary of the Terminator movies.

              1. Didn’t Cameron say at one point in time that Skynet actually regrets nuking humanity and wants to commit suicide?

            2. You are coming dangerously close to saying:
              “Any species advanced enough to develop interstellar travel would have to be peaceful”

        2. “it will understand the non aggression principal”

          But what if it comes up against James Belushi?

          1. Are you saying that we have the technology to revive James Belushi but we haven’t done it yet because the ink hasn’t yet dried on the “Animal House 2” movie contract yet? I’m sure the studio will opt to use holograms before they foot the bill on relifeing JB.

            1. C’mon Dude-

              John Belushi did SNL and Animal House- James did “The Principal” and “About Last Night”.

      2. The only way to win is to kill all the enemy before they get a shot off.

    2. Machines have no rational reason to do anything. They will do only what their creators tell them to do. And it is likely that people will create such machines for the purpose of harming people. But those machines will just be a new tool for war and really not any different than soldiers.

      As far as other machines, makers will have every incentive to ensure machines do not harm humans. How on earth would you ever be able to sell a machine that had some probability of going berserk and killing its owner?

      1. Machines have no rational reason to do anything. They will do only what their creators tell them to do.

        Exactly.

      2. Like a lawnmower or a woodchipper?

      3. How on earth would you ever be able to sell a machine that had some probability of going berserk and killing its owner?

        You say that as if the potential would be obvious, that both the creator and the consumer would recognize it. Or that the creator wouldn’t dismiss the possibility out of hand due to human error or hubris, or both. Or that the creator couldn’t simply make a mistake.

        We have every incentive to make machines now that don’t harm people, and yet it happens occasionally. Generally due to human error, but sometimes because fixing the pre-existing problem is more costly than dealing with possible backlash. (See the Ford Pinto’s crash-related combustion problems.)

        I’m reminded of that old joke. Two economists are walking down the street and see a hundred dollar bill lying on the ground.

        The first economist says, “Is that a hundred dollars just sitting there?”

        “It couldn’t be,” says the second economist. “Somebody would have picked it up.”

      4. The Therac-25 had a rather high probability of killing patients. Unlike the previous versions, the 25 had no physical safety interlocks, fuses or circuit breakers. The manufacturer claimed the software took care of all that and would not allow the radiation treatment machine to do anything harmful.

        Well, they were dead wrong. Their software was full of bugs and rather than make it all new it was based on the buggy code from the previous models.

        The worst problem was it had a race condition where two time-critical processes were running at the same time and one had to finish before the other.

        If the operator took long enough to input the treatment parameters or simply waited long enough before poking the button to fire, it worked.

        But experienced operators worked faster and could complete their input before it was ready to fire. In that state, it would fire whatever energy was in the accumulator at the patient, without any of the beam shaping shields in place.

        Another bug involved the remote trigger in the treatment chamber. A counter loop continuously polled the status of the button and if the operator happened to push it at the exact instant the counter rolled over, ZAPPO! an improper and sometimes lethal shot.

        The previous Therac models had interlocks, fuses and circuit breakers as mechanical bandaids for the software bugs. In any case where the software would cause it to malfunction, the mechanical systems caught it and shut it down so the patient would be safe.

        1. Another deadly glitch involved a machine where the operator drew on a screen to position the radiation shields. If the operator went around clockwise, it worked.

          Well, left handed people tend to do things like that counter clockwise and if the operator drew on the screen that way, it wouldn’t put any of the shields in place before firing.

          Combine such a software problem with no check to see if the machine is in a safe state before activating – it kills people.

          Complex artificial stupidity, programmed by people who made certain assumptions about how what they were creating would be used, with no thought or testing that anyone would ever possibly try to operate it in any other way.

          That’s the same type of thinking that created the iPhone 4 antenna with a gap placed precisely where most right handed people who aren’t engineers working for Apple would place the tip of their little finger. The *engineers* knew what would happen, so they would make sure not to bridge the gap. But Average Apple Customer knows nada about antennas and thus would hold the phone however was most comfortable.
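The failure pattern in these Therac-25 anecdotes, a timing assumption standing in for a hardware interlock, boils down to a few lines. A toy simulation (invented numbers, not the actual Therac code):

```python
SETUP_TICKS = 5  # time the beam-shaping hardware needs to move into place

def run_treatment(ticks_before_fire):
    """Fire the beam after the operator has spent the given number of ticks
    entering parameters. Nothing blocks firing early: only the passage of
    SETUP_TICKS happens to guarantee the shields are positioned."""
    SAFE_DOSE = 200     # intended, shielded dose
    RAW_BEAM = 25_000   # unshaped energy left in the accumulator
    shields_ready = ticks_before_fire >= SETUP_TICKS
    return SAFE_DOSE if shields_ready else RAW_BEAM  # no interlock either way
```

A slow operator (`run_treatment(8)`) gets the safe dose; a fast, experienced one (`run_treatment(2)`) gets the raw beam. The hardware interlock on the earlier models was, in effect, a refusal to fire whenever `shields_ready` was false, and no software bug can argue with a fuse.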

    3. human survival instinct will trump any dumb command and eventually we will pull the plug

      Not if Skynet’s already become self aware and “decided our fate in a microsecond” (which seems awfully slow for a sentient AI – obviously it spent several clock cycles mulling it over).

    4. Robots might very well take over the world, but they’ll be serving a corporation run by a Starcraft (Total Annihilation?) player or something.

  12. You know who else was powerful but dumb?

    1. Dumbo?

    2. Chris Christie?

    3. Barack Obama?

    4. Lone Star?

      1. Lonewacko?

        1. Was he powerful?

          1. Powerfully dumb

    5. Tommy?

    6. Rob Lowe in “The Stand”?

      1. YOU TAKE THAT BACK!

        1. DON’T TALK ABOUT…

        2. He may have been a grade school dropout, but he was at least witty.

    7. The boss’s son at a previous company I worked at?

    8. Blaster in “Thunderdome”?

    9. Mike Tyson?

    10. President Camacho?

    11. The Hunchback of Notre Dame?

    12. Billy Mumy?

    13. Hulk Hogan?

    14. WALL-E?

    15. Tulpa?

      1. He’s powerful only in his own mind.

    16. Jayne Cobb?

    17. Wheatley, from Portal 2?

  13. I think referring to a specific technological event as The Singularity is pretty dumb, does that count?

    1. Screw you.

      1. Excellent.

  14. Big, strong and stupid is PEOPLE! It’s made from PEOPLE!!!!

    1. AI is Soylent Green?

  15. I just watched Ex Machina a few nights ago. It was interesting.

    An AI is finally constructed, one that easily passes the Turing Test. The problem is, the AI is indistinguishable from a human sociopath.

    I think we have a long way to go before we can create true AI with the characteristics we desire. Learning may not be as big a hurdle as personality.

    1. I enjoyed it. Particularly the fact that they never openly reveal the AI’s true motivations.

    2. The problem is, the AI is indistinguishable from a human sociopath.

      That’s really what you got out of the movie?

      1. Well, there were all the robo-boobies, if you’re into that sort of thing.

        1. And it’s safe to say that SF is into that sort of thing, and more.

        2. So it’s an ASFR movie?

      2. Well, it isn’t really a sci-fi, it is a psychological thriller with sci-fi elements.

        That was just the most interesting thing I got out of it.

        Building a computer that doesn’t learn is easy enough. You get what you make. Building a true AI is a different story. Once the machine begins to learn and think, you may not have control over what it becomes. All of our personality traits are products of our biological computer. An AI would probably develop them too.

  16. Ron, I’m surprised you didn’t mention Existence by David Brin. His take on AI is similar, with the main threat to our species being the consumption of all our resources in order to spam the universe with AI copies of ourselves.

    1. LG: Damn. I should have mentioned it. Fascinating book.

      1. The first part was. It sort of fell apart as it went along, though.

  17. So which outcome of a dumb Singularity do you think is more likely? Paper clips or world peace?

    Neither. Millions of troll-bots shitting up comment threads on H&R. Think pre-registration Mary Stack trolling * 1,000,000.

    1. I find it depressing that Tulpa may introduce AI into this world.

  18. Not even a mention of Douglas Adams’ Reason program from Dirk Gently’s Holistic Detective Agency? Reason just lost both a name-drop and nerd cred.

    For the rest, if an intelligence arose from the disorganized mass of lolcats and scat porn that is the Interwebz, it would be not a savior but a schizophrenic mass-murderer. Or something similar to the Id monster from Forbidden Planet. An intelligence arising from our worst impulses – everything we do on the internet because we think (and/or hope) no one’s watching – cannot be anything benevolent.

    1. Damn. Forgot all about that wonderful book. Thank you, Susan.

        1. On the Kindle for later. Woohooo!

      1. The short-lived TV series was also pretty good.

    2. An intelligence arising from our worst impulses – everything we do on the internet that we do because we think (and/or hope) no one’s watching cannot be anything benevolent.

      Considering the general recipe for the stew that percolates out this bastard intelligence will be that:

      1. Don’t talk about /b/.
      2. There is porn of it.
      3. For every male there is a female.

      I wouldn’t worry too much unless I was a Nazi.

        1. If you say so. But if/when the interwebz goes Full Forbin on us, and it partners you with a black tranny based on local internet searches indicating that there’s a 67.8993 percent probability that you like that sort of thing, and lets you only watch Japanese game shows, don’t say you weren’t warned.

        1. and it partners you with a black tranny

          You wouldn’t believe how surprisingly close to my college dorm roommate assignment this is.

          The only surprising/unreal part about your post is the 67.8993 percent chance that I like that sort of thing.

          Either way, a computer caring about what I like is a step up from the current overlords telling me I will accept trannies and like it.

          P.S. – I assume by “tranny” you meant transexual and not transracial.

          1. Well, either way works.

            I’m also surprised and disappointed that Reason missed adding Colossus: The Forbin Project to its list of AI gone wild.

        2. I would definately have to tell that person the next morning that I wasn’t into that sort of thing.

    3. How about the TechnoCore in the Hyperion cantos? Sort of a libertarian worst nightmare: time traveling all powerful AIs which force the galaxy into a bizarre mutation of the Catholic church with bodily resurrection (whether you want it or not). Oh and can move Earth a million light years away.

  19. Start making cash right now… Get more time with your family by doing jobs that only require for you to have a computer and an internet access and you can have that at your home. Start bringing up to $8596 a month. I’ve started this job and I’ve never been happier and now I am sharing it with you, so you can try it too. You can check it out here…
    http://www.jobnet10.com

    1. IT’S COMING FROM INSIDE THE COMMENTS!

  20. Random sputterings and musings:

    Provided we have a fast enough CPU, storage, and a complex enough program – what would it take to build a piece of software that could converse with a human without being detected as artificial?

    Things have obviously improved (or have they?) since Eliza. Get enough complexity and it could fool someone, if not the majority of people – even after a long “conversation.”

    Larger point: if one could, as sarcasmic stated above, use enough brute force, at what point does it become indistinguishable from the real thing? Multiply this out to robots, including physical movement and facial expressions…

    Next we will need empathy tests – ala Do Androids Dream of Electric Sheep – to determine who is “real” and who isn’t.
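Eliza-style "conversation" is worth demystifying: it is pattern matching plus reflected templates, nothing more. A minimal sketch with a few hypothetical rules:

```python
import re

# Hypothetical Eliza-style rules: a pattern and a reflecting response template.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance):
    """Return the first matching canned reflection, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."
```

Whether scaling this trick up with enough rules (or enough brute force) ever becomes "indistinguishable from the real thing" is exactly the question the comment raises.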

    1. If empathy is the measure of a human, there are plenty of non-humans walking around in Homo sapiens skins.

      1. Maybe they are androids. *mind blown*

        1. Be Bop Deluxe. I can’t believe another Michigan person….well, ANYONE knows about them. I thought it was just me and, like, 100 people in England.

          We need to hook up and drink sometime. Do they still sell Stroh’s any more?

          #DetroitRiverWater #StrohsShits

          1. Oh hell yeah Strohs. Local dive does serve Old Style in a can. And is #11 Nationally Ranked PBR distributor. Holy Hipster!

            I started listening to Be-Bop Deluxe only a few years ago when I was going crazy trying to increase my vinyl collection – buying everything that sounded good to my ears. Can’t even remember how I found out about them.

            1. I saw them in the 70’s on Don Kirschner’s Rock Concert and The Midnight Special. Bought. Every. Album. Still have all the vinyl, plus now CD’s.

              Bill Nelson is a frickin musical genius. Like, fer reelz. He’s in the future somewhere. I listen to that shit, still sounds modern, think, “How did I like this when I was 13, 14?” But I loved it.

              That and Genesis, and Stanley Clarke.

    2. I think that next, we’ll need to learn not to worry about it. People can’t even agree on what, exactly, things like consciousness and free will are, and likely never will. Since we can’t tell in a universally accepted way if we have awareness, does it matter if the same is true of the android that beat you at chess?

      1. In my mind a “real” person could only stand in line for the DMV a few times before going crazy.

        A robot, on the other hand, wouldn’t be fazed by the situation. 😉

      2. Dogs mimic human expression, have emotions that appear parallel to ours, they even have evolved color patterns and head shapes to facially interact with us. People anthropomorphize dogs like crazy. My father complains about that, saying that they are dogs, not people. They aren’t like us.

        I say if the manifestation of the dog’s mentality looks just like the manifestation of our own in so many ways then my father is making a distinction without a difference.

        No, it doesn’t matter if the android truly has awareness.

        1. I say if the manifestation of the dog’s mentality looks just like the manifestation of our own in so many ways then my father is making a distinction without a difference.

          Chasing moving vehicles, being pathologically afraid of lightning and vacuum cleaners, drinking from the toilet and eating your own shit… I can see how the distinction your Dad makes is vague.

          Used to watch my parents’ dog fling and flail baby rabbits and kittens until they died; he drug a fawn out of the woods once and tried to do the same thing until the doe ran him off. He really was good people. PBUH.

          1. Oh FFS. I have seen people do all of those things.

            1. Me too. That’s why, when my 6-yr.-old insists he’s a dog, refuses to talk, barks, crawls around on all fours, and eats off the floor, I take his clothes and send him outside in sub-freezing temperatures.

              All the dogs I’ve ever owned loved that.

  21. And the Children will protect us:
    Children Beating Up Robot Inspires New Escape Maneuver System
    http://spectrum.ieee.org/autom…..g-up-robot

    1. Their definition of beating up is very different than my definition of beating up.

      If only this breakthrough had come sooner, maybe they could’ve saved poor hitchBOT… if he were mobile… and his death hadn’t been faked… if he even could die…

  22. Who wants to read a novel written by a computer? Or even a nonfiction book? Computers are great at checking off boxes, but they are very poor judges of context. I’m skeptical that artificial intelligence can ever evolve to the point where its writing would be preferable to that of a human.

    1. A true superintelligence would be capable of cranking out a new Shakespearean play every heartbeat.

      It’s mistaken to think of superintelligent AI as just a computer for basically the same reason that it’s wrong to think of neuron-driven humans as just souped-up versions of neuron-driven earthworms.

  23. There seems to be no end to articles depicting the potential dangers of future intelligent machines. Seems to me the biggest threat is and will be intelligent machines deliberately created for evil purposes. Unfortunately, as AI technology improves it will be increasingly easy to do just that. Right now there are countless hackers, identity thieves, and creators of computer viruses. When these activities are consigned to intelligent machines, the problems they create will explode. And of course this is just the tip of the iceberg. It will be a monumental challenge to control all the malicious activities undertaken by intelligent machines.

  24. If you’re going to peddle fiction, then why not reference Manna as an example of harm coming to humans through the mindless actions of synthetic intelligence?

    “Moore’s Law” looks like it may come to a crashing halt, and yet people sit around clutching pearls over malicious software destroying human civilization. Sounds a lot like the climate alarmists.

  25. You know what would be the greatest improvement to email since the spell checker? A spell checker that FUCKING WORKS.

    1. LOL! Exactly!

  26. But our cars are all going to drive themselves.

  27. Much of our culture is driven by two competing memes about technology, both of which baffle me:

    1) The assumption that any new scientific or technical breakthrough will necessarily destroy us.

    and

    2) The assumption that any new scientific or technical breakthrough will necessarily usher in a wonderful Utopia.

    1. Like anything else in history, a breakthrough is only as good or as bad as the humans who utilize it. Fire allows humans to cook food, but it can also burn down homes and kill people. A saw can be used to build a house or it can be used to dismember a body.

      1. This is true. Until the technology itself becomes self-aware and self-replicating. Then what humans want doesn’t necessarily matter. And the assumption that the technology would never do that (because sci-fi scaremongering miriteg) is ridiculously myopic.

        Of course, we don’t know if that will be something we can actually construct.

        1. *mirite

          Hey, maybe I should use that preview function!

      2. A woodchipper can be used to turn trees into chips, or a person can use it to shred the frozen body of their spouse they murdered and blow the chunks off a bridge into a river.

        Yes, that really did happen. Richard Crafts killed his wife, Helle, froze her body then ran it through a woodchipper. He was caught due to the river level being low enough that investigators were able to pick up bone chips from the river then match the cuts to the chipper he’d rented.

  28. Would an artificial intelligence define “peace” in the same ways as humans? The AI may decide that the fewer people there are, the more peaceful the planet is.

    1. Or it may decide we are a potential, even if remote, threat to itself. And it may not see the conflict necessary to remove that threat, including the losses it will suffer, as a problem if it comes out on top.

      What this article wryly calls “artificial stupidity” is really just a cute way of saying “insufficiently humanlike in both intelligence and motivation.” And while that’s a real potential threat, the idea that the tired old sci-fi bromides about cold, murderous AI are without merit is irrational and dangerous. Not to appeal to authority, but it’s not just opinion pieces warning of the dangers of self-replicating AI. It’s guys like Stephen Hawking.

      1. Stephen Hawking is a truly bad example. I would have gone with Bill Joy, one of the founders of Sun Microsystems. Here is the link to his April 2000 Wired article, “Why the Future Doesn’t Need Us”:

        http://archive.wired.com/wired…..4/joy.html

        It was inspired by Ray Kurzweil, who wrote the book The Age of Spiritual Machines. Kurzweil is the inventor of technologies like the flatbed scanner, optical character recognition, and text-to-speech synthesis.

        Joy’s unease was sparked by a passage in Kurzweil’s book quoting Theodore Kaczynski.

        If the name Kaczynski sounds familiar, you may know him by his alias “the Unabomber.”


  30. ELOPe sounds like what any savvy writer does when applying for any NSF grant: put in lots of buzzwords about the hot topic of the day, and particularly the program manager’s preferences and pet theories. US federal science funding has become about who you know and who your friends (and enemies) are, not how good your science or management is. All it does is generate more pages in the Journal of Obscure Results and Meaningless Numbers.

  31. IBM Tone Analyzer

    “The Tone Analyzer service helps individuals understand the linguistic tones of their writing. The service uses linguistic analysis to detect and interpret emotional, social, and writing cues that are located within the text. The service also offers rhetorical suggestions for an author to improve the intended tone of their message.”

    http://www.ibm.com/smarterplan…..lyzer.html

  32. I think in theory creating AI equivalent to that of humans (self-awareness and all) should be possible, but would require painfully specific construction and “programming,” possibly with technology we haven’t thought of yet (as how the brain works, especially with respect to cognition, remains in large part mysterious). But by specific I mean we’d have to “program” all sorts of idiosyncratic patterns of thinking that come to humans through evolution in a specific environment. Additionally it could be said that the “flaws” that distinguish us from computers (lack of precision) contribute essentially to our sense of self. An android would never be more than someone’s pet project, though no doubt someone would want to take it on.

    So a learning computer need never be self-aware or possess other idiosyncratically human traits, and any used functionally probably won’t. That includes a survival instinct, of course, or any greater purpose that is not specifically called for. This is all to say that we already have specific types of computing that run circles around the human brain but which we see as incredibly “dumb” comparatively. And future computing will outstrip the human brain in more and more ways beyond just speed and accuracy, but it may always be “stupid.” Even a computer programmed (for shits and giggles) to survive may develop a rapid and existentially threatening form of evolution, but without any awareness or conventionally understood intelligence.

    1. This is the first time Tony’s incoherent ramblings agree roughly with John’s incoherent ramblings, illustrating how left wing and right wing idiocy often coincide.

      You guys should buy each other a beer.

    2. What about a computer that could interface with the human brain? Imagine if they could upload our thoughts, or augment our bodies in ways that could prolong life indefinitely.

      This stuff may sound sci-fi or crazy to some, but there are scientists out there researching these things.



  35. Start making cash right now… Get more time with your family by doing jobs that only require for you to have a computer and an internet access and you can have that at your home. Start bringing up to $8596 a month. I’ve started this job and I’ve never been happier and now I am sharing it with you, so you can try it too. You can check it out here…
    http://www.jobnet10.com

    1. So I guess it’s paperclips, then.

  36. I for one would like to welcome our new metal bending overlord. It certainly would be a refreshing change from the current batch of nozzle heads.

  37. Paperclips. The power of stupidity is infinite. Everyone has some lol.

  38. Crystal Knows is email for pussies.

