
Too Smart for Our Own Good

Will cognitive enhancement destroy the human race?


Could using technology to enhance our cognitive functions make people too smart for our own good? The problem, as Oxford University philosophers Julian Savulescu and Ingmar Persson see it [PDF], is that enabling people to become smarter via drugs, implants, and other biological (or genetic) interventions will speed up scientific and technological progress, which in turn will increase the ability of smart, evil people to make and deploy novel weapons of mass destruction.

"We may not have yet reached the state in which a single satanic character could eradicate all life on Earth," they rather dramatically write, "but with cognitive enhancement by traditional means alone, we may soon be there." The only thing we have to fear is ourselves. 

"This growth of knowledge will be instrumentally bad for us on the whole, by unacceptably increasing the risk that we shall die soon," they argue. "It will be bad for us that scientific knowledge continues to grow by traditional means, and even worse if this growth is further accelerated by biomedical or genetic enhancement of our cognitive capacities." In other words, it's already bad that our species has become so smart, but speeding up technological progress poses an even greater existential threat to humanity. Specifically, they worry that making people smarter will enable the creation of things like ever cheaper nuclear bombs or more potent weaponized pathogens.

As Harvard University philosopher Elizabeth Fenton observes, "Although Persson and Savulescu stop short of concluding that we should stop pursuing scientific progress altogether, their argument suggests that this would be the prudent step to take." We're doomed unless we return to the era of bearskins and stone axes.

Savulescu and Persson, however, acknowledge, "Some may want to object that sufficient cognitive enhancement by itself will produce the moral enhancement required to avoid the misuses of science and technology we have indicated." They accept that traditional means, chiefly the formalization of science and education, have already cognitively enhanced humanity. They then claim, "It is obvious that moral enhancement by traditional, cultural means—i.e. by the transmission of moral instruction and knowledge from earlier to subsequent generations—has not been anything like as effective and quick as cognitive enhancement by these means." There is good evidence to doubt this factual claim.

Let's take the long-term increase in income per capita in some areas of the world as a proxy for scientific and technological progress. Economist Angus Maddison has calculated [PDF] that in Western countries, incomes increased from $426 per capita in the year 1000 to $25,399 in 2006, nearly a 60-fold increase. What happened to the rate of violence, measured as homicides? According to British criminologist Manuel Eisner, murder rates have also steeply declined [PDF] since the Middle Ages. For example, murders in England fell from an annual rate of 24 per 100,000 in 1300 to 0.6 per 100,000 today, a 40-fold decline.
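The fold changes cited above are easy to verify; here is a quick sanity check of the article's own figures (nothing below is new data):

```python
# Sanity-check the fold changes cited in the article.
income_1000, income_2006 = 426, 25_399    # Maddison: per-capita income in Western countries, dollars
murders_1300, murders_now = 24, 0.6       # Eisner: homicides per 100,000 per year in England

income_fold = income_2006 / income_1000   # ~59.6, i.e. "nearly a 60-fold increase"
murder_fold = murders_1300 / murders_now  # 40.0, i.e. "a 40-fold decline"

print(round(income_fold, 1), round(murder_fold, 1))
```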

Consider also that there is evidence that average human intelligence has been significantly enhanced in recent years. Specifically, University of Otago (New Zealand) political scientist James Flynn discovered that average IQs in 30 countries have been steeply rising in the 20th century. How steeply? Americans gained about 22 IQ points over the 70 years between 1932 and 2002. At about the same time, deaths from warfare around the world have been declining. The 2009-2010 Human Security Report notes that in the 1950s there was an average of six international conflicts being fought around the world each year; in the new millennium the average was less than one. Even more happily, the number of civilians killed by organized violence in 2008 was the lowest since data started being collected in 1989. At least so far, the evidence does not suggest that a general increase in overall intelligence (cognitive enhancement) leads ineluctably to greater violence.
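For scale, the Flynn-effect figure above works out to roughly three IQ points per decade; this is a back-of-the-envelope conversion from the numbers already quoted, not a figure taken from Flynn's own papers:

```python
# Convert the cited 22-point American IQ gain over 1932-2002 into a per-decade rate.
iq_gain = 22                 # IQ points gained
years = 2002 - 1932          # span of the comparison: 70 years
per_decade = iq_gain / years * 10
print(round(per_decade, 1))  # ~3.1 points per decade
```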

In any case, Savulescu and Persson see one possible way out: moral enhancement. More research, they argue, should be directed toward figuring out how to make people more altruistic and less aggressive. As examples of how research might enhance morality, they point to neurological findings suggesting that tweaking brain chemistry, e.g., boosting oxytocin or administering selective serotonin reuptake inhibitors, can already promote trust and increase cooperation. Once developed, they argue, "safe, effective moral enhancement would be compulsory."

In the end, Savulescu and Persson impale themselves on the horns of a dilemma when they acknowledge that "we are in need of a rapid moral enhancement, but such an enhancement could only be effected if significant scientific advances be made." Rapid moral enhancement requires the exact same scientific progress that is allegedly leading us toward the possibility of ultimate destruction. That seems to me to be an argument for going full steam ahead.

Ronald Bailey is Reason magazine's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.



  1. Should technological progress be stopped until moral progress catches up?

    Since my own moral progress has been caught up for some time, I’ll say “no”.

  2. Compulsory “soma”? Fuck off technocratic douchebags.

    1. It’s not soma, it’s significantly creepier. They want to drug us to make us more like what they want. Sort of an Ebenezer Scrooge before-and-after drug. Our personality isn’t convenient to their purposes so it needs to be changed.

  3. “could enable a morally corrupt minority, perhaps even a single satanic figure, to destroy the entire human race.”

    Not to scare you or anything, but nuclear weapons have the potential to do just that. And the people who invented those things were very smart.

    So it’s possible that smart people could, indeed, cause the end of the world.

    1. begs the question – is it “smart” to end the world?

      1. Smart, but unwise and evil.

        I shouldn’t have to remind anyone here that someone can have a high Intelligence and a low Wisdom, plus an Evil alignment.

        1. Lawful evil is the worst kind of evil.

          1. They certainly make better plans than Chaotic types.

          2. Fool! Neutral Evil is called True Evil for a reason. A Neutral Evil character does not engage in evil for evil’s own sake (Chaotic Evil) and is not bound by rules or customs (Lawful Evil). A Neutral Evil character can say anything and do anything that furthers their own purposes.

            1. Mr. President?

              1. Close. I’m a Yugoloth.

        2. Everyone should read “The Sociopath Next Door”. Imagine being sightless, deaf... then imagine having no conscience. The very smart sociopath can end everything. It is only a game.
          Look to our politicians and lazy brothers-in-law for examples.

    2. Could it be….hmmmm….SATAN?

    3. You completely ignore the fact that such intelligence will allow us to colonize our own solar system at a rapid rate, making the destruction of Earth irrelevant.

  4. If I could get me some of that cognitive enhancement, I could probably finally get my doomsday machine to work right.

  5. I’m The Smartest.

    Just sayin’.

    1. I KEEL YOU, BLASPHEMER!!!

      1. But, since I made you both up, does that mean I’m actually already the smartest?

        1. ** tsk tsk tsk **

          Fool!

        2. …..does that mean I’m actually already the smartest?

          Please…..”man” has elected George W. Bush and Barack Obama in succession.

          Argument about who is smartest over.

          1. You created both of them.

    2. “I’m a man…of wealth and taste…”

      1. Next time you’d better ask permission before you introduce yourself.

  6. “some brain chemicals, etc. … “safe, effective moral enhancement would be compulsory.””

    Didn’t they try that in the *Firefly* movie?

    1. yes, some sociopath who has clawed their way to the top of the political power forcing everyone, under the threat of imprisonment or execution, to take drugs making them sheepish and compliant and altruistic, while the sociopaths don’t take the drugs — what could possibly go wrong with such an excellent plan?

      1. Massive die-offs from the stray bullets shot by people trying to do tai-chi while firing guns?

        1. The first gunkata reference I’ve seen in years. Gather your award, please.

  7. An alternate suggestion might be to simply legalize mj and x and just see how that works out.

  8. This is Ceti Alpha 5!!!

    1. Five times my strength and yet I still beat the shit out of you with a plastic lever.

      1. Twice your intelligence and it never occurs to me that you’re leading me into that nebula to launch some kind of trick attack.

        1. Khan really wasn’t very smart.

          If he was smart, when they rescued him off the Botany Bay he would have played nice until he could get a ship of his own without directly taking on an interstellar military right out of the gate.

          Dumbass. Gotta be smart enough to know when to play dead, dude.

        2. Actually Khan knew it was a trap, I just taunted him to the point where his pride would not let him give up.

  9. That was awesome. These two propose artificially enhancing the entire population, by force no less, into a sort of parody of Wells’ Eloi. I’m sure they’re accounting for the possibility that someone, somewhere, might escape treatment? Or that someone would have to be in charge of making sure it was actually administered in the first place?

    Either this is a joke, or these two have begun the program already, using themselves as guinea pigs.

    1. This was actually the plot of an early Heinlein book. Everyone got turned into sheep and the bad-ass rednecks came down out of the mountains and took over. Can’t remember the name, but legalized dueling was involved.

    2. It’s obvious Bailey hasn’t started using cognitive enhancing drugs.
      The idea that altruism is morally good is just touchy-feely bullshit.

  10. The pleasure of wiping out you meat bags is all mine. Death by shiny metal teeth . . . and flatulence. LOL

    Jess
    http://www.anymouse.com

    1. The bot LOOMS!

      1. Crap. Magnus? Where you at, bro?

  11. The solution here is to make sure that governments of the future are structured so that power is specifically enumerated to it, cannot go beyond those limits, and that ultimate authority derives from the consent of the governed. In this manner, really smart, modified evil “people” (really mutants but I digress) will not be able to establish an authoritarian form of government, and use that power to enslave the governed.

    We could write it down, in a document of sorts. And record all the debate surrounding the construction of the document and the intentions of the Constructors. And we’ll have advocates publish defenses of this new type of government that will allay fears of a new tyranny.

    It’ll definitely work.

    1. I don’t understand why you are satirizing the living constitution. People totally consent to us taking away their natural rights, I mean they voted for it, right? What more consent do you need?

  12. How steeply? Americans gained about 22 IQ points over the 70 years between 1932 and 2002.

    Or the tests have become increasingly easier to solve. Take your pick.

    1. Flynn’s research looked at comparable questions.

      1. Color me skeptical that a massive increase in the average intelligence of human beings occurred in the space of a generation or two.

        Biology doesn’t work that way for huge populations insulated by numbers from rapid changes.

        1. It isn’t that hard to believe, at all. Consider the strides in nutrition and prevention of childhood diseases. Both of those could play a big role in bumping up the average IQ a few points.

          It isn’t that people have changed but that their chance of realizing their innate capability has improved.

        2. I doubt it was a massive increase in average intelligence, although things may well have gotten better. (People certainly got taller within a couple generations.) I suspect it was more of a subtle feedback thing – the more people are exposed to IQ-test-type questions, the more they tend to think in those terms and to expose others to thinking in those terms, even if unintentionally. Basically, we may have slightly structured our culture around standardized tests.

          1. Having spent a lot of time in the developing world, I agree that the average intelligence of mankind has increased dramatically in a very short period of time.

            The diminutive stature and impaired thinking ability of the truly under-developed parts of the world is mind-blowing when seen up close.

            Additionally, the marriage of cousins, a common phenomenon outside the Western world, no longer occurs here in the developed world. This also seems to increase the average intelligence of the groups involved.

            This is not to say these other people are stupid. They just don’t think, react, or plan as well as people from a more developed society. As always, keep in mind that what is accurate for a group does not necessarily apply to individuals from that group.

            Also consider the nurture effect of receiving large amounts of focused education at a young age, and how this may cause the brain to develop in ways that a brain not exposed to this kind of training would not.

  13. It’s not lack of brains that stops people building their own nuclear weapons, it’s lack of access to materials.

    1. and the delivery system, logistics, support, maintenance…

      1. You gotta mix it Charlie,
        You gotta fix it,
        Must be love,
        It’s a bitch.

    2. A lotta guys might say those people aren’t smart enough to gain access to materials.

    3. I would like a cup of tea and 5000 pounds of that delicious yellow cake, please.

    4. Yeah but just you wait until a children’s genetic research kit is used to construct the virus that wipes out mankind…

      1. Haven’t you heard? It’s going to be unsanitary phones that wipe out mankind.

    5. What are you talking about? All you need to do is steal some plutonium from Medatomics. It’s a piece of cake.

      1. I hear you can trade a crate of used pinball machine parts to some Libyans for it.

  14. I was wondering when somebody was going to suggest that we need more stupid in the world.

    Mel Brooks – I know you’re writing the script. Stop. Just stop.

  15. This sounds like just another tedious application of the Precautionary Principle.

    Do absolutely nothing that might cause something bad to happen somewhere, sometime?

    Unless, of course, it’s their pet project:

    More research, they argue, should be directed toward figuring out how to make people more altruistic and less aggressive.

    Geez, making the human race more tractable, more easily governed. What could possibly go wrong?

    1. What’s not to like?

    2. Or…tyranny is bad…unless I’m the tyrant.

      I didn’t know aggression was inherently evil.

      1. I’m kinda hungry.. Are you made of food?

        1. If you like the taste of pure evil with a hint of cinnamon, sure.

  16. That ship has sailed, guys.

    Using current technology, it’s probably already possible for a determined group to brute-force the development of a virus that could wipe out the human race. All they’d need is the desire and the funding.

    I reach the opposite conclusion of these researchers. Since it’s not possible to stop scientific advancement worldwide (it probably wouldn’t be possible even under a world government, and we fucking don’t have a world government and are not likely to have one any time soon) it is imperative that we cross the machine/man barrier as soon as possible, precisely because we need to preserve humanity from these very real near and medium-term threats.

    If we cross the machine/man barrier, the sensible thing to do would be to upload yourself, set up some kind of nanotechnology maintenance system for whatever “server” is hosting “you”, and then launch yourself into deep space on a trajectory set to never return to Earth. I imagine many, many people would do the game theory and choose to do just that. After that happens, humanity is effectively immortal until all protons decay.

    1. The Bug Life Chronicles by Phillip C. Jennings

      1. I haven’t read this collection, but one of the ideas in the summary matches my first reaction to Fluffy’s response. Being trapped in a mechanical device hurtling through space for eternity, in isolation, without physical sensation, isn’t exactly my idea of a good time.

        1. Matrix it, baby.

          And who says you’d be in isolation?

          A machine consciousness can experience the passage of time at any speed it wishes.

          You could very easily communicate with other “travelers”, even over interstellar distances, simply by blanking over the communications transmission time.

          1. I figured there was an error in my conception somewhere. Thanks for clarifying!

            1. To me the bigger issue is that if you solve the immortality issue, the actual conditions of existence at the moment you solve it don’t matter, because time is literally on your side.

              Say the first version of the “server” you’re in sucks. That would be OK, because you’d then have a billion years (or whatever) to make a better one.

              There’s that old Twilight Zone episode about the guy who becomes immortal, but then is convicted of murder and given a “life” sentence. We’re supposed to think “Oh ho! WHAT A TWIST! It’s like an M. Night film!” but I didn’t think that at all. If the guy’s immortal, he’ll probably outlive our entire system of governance. He has all the time in the world for them to decide to let him go, or for a new Dark Age to descend, or for the frickin’ Day of the Triffids to happen, or whatever. It’s still better to be immortal, because one day he’ll be out of that jail, but you and I will still be dead.

              1. I don’t think the conditions of existence are immaterial. If they are not ideal they can be fixed, but if they are so far beyond ideal as to cause permanent damage to your mind you have an issue. And if you’re talking about transferring consciousness to machinery which might process reality on the order of thousands of times per second, 10 minutes of existence in the first version of the server could be enough of an eternity to drive you completely mad.

                1. Yep! Imagine reading 20 years’ worth of Fox News user postings. Then having to reread them 40 seconds later because you already read everything else 5 times in the 39 seconds before your 2nd helping.

                  Clock speed, Damn you Intel…

                2. I see what you’re saying, but I just don’t know.

                  If I was to die right now, and after I died I opened my eyes and discovered I was in hell and Hitler was my first roommate, I for one would be absolutely delighted.

                  Because I expect there to be nothing at all when I die. I expect to just close my eyes and be gone. Hell would actually be a pleasant surprise.

                  If I’m in hell, maybe someday God will change his mind. If I’m dead and gone, that’s it.

                  1. That clarifies a lot of the difference. I could see the argument about agonizing existence being preferable to no existence, but for whatever reason of my own makeup it doesn’t appeal much to me.

                    Also, after I am slain I fully expect to wake up in Valhalla as a newly minted Einherjar.

                    1. Well here is my optimistic take on increased mental capacity (whatever form that may take) coupled with the immortality of buglife man-machine hybrids.

                      Think of subjective entertainment possibilities. 50 billion imaginative intellects pursuing their own interests singularly and collectively.

                      Isolation is a choice, not an eventuality. Unless it’s imposed as a punishment. You know, like for accidentally chopping a section of the North American continent off in time and inserting it into 3500 BC or such.

                      Good Times!

                    2. I’d like to point out that you can always opt out of the immortality thing. People that argue that somehow death would be preferable to immortality always neglect to mention that it would be optional. There’s something sick about wanting everyone to die because you don’t want to live forever. I’m not saying that this is your stance but there are people out there that hold that position.

                  2. God can’t change its mind.

                    1. He did twice before.

                      First we had that all, “Hey, live here with me in Paradise…on second thought, get the fuck out and take your bitch with you!” thing.

                      Then we had that whole, “Listen to this guy Moses and do everything he writes down…no, wait, fuck that ‘No pork’ shit, just listen to my man Jesus when he tells you to love your neighbor!”

                      So maybe he wakes up one day, cracks open a cold one, and says, “You know what? I miss all those assholes I sent to hell. Maybe I’ll call ’em up and invite them over for some brews.”

                    2. Redemption, how does it work?

                    3. Maybe when it comes to the end of days we all find out that the whole hell thing was a giant practical joke, and maybe pestilence and even mortality too. He’d be laughing, saying, “I got you guys; after the first 10,000 years I thought you were about to catch on, but no! I was able to pull one over on ya.”

                  3. Hey for all you know Satan and Hitler might be lovers singing a duet about taking over the earth.

                3. I would hope you would figure out how cron works before you went insane from processing an eternity in a couple of minutes.

    2. ^THIS^

      The single most important step in biological life’s quest for ‘immortality’ is to transcend carbon-based reactions and move to something more reliable and long lasting. Space travel beyond the inner solar system is damn near impossible without it.

      It will probably take 200 years, but dammit we’ve got to try.

    3. “If we cross the machine/man barrier,…”

      Won’t happen. A machine that has learned to be you, will not actually be you. There will be no continuity of consciousness. We may be able to create AI clones of our minds that will be essentially immortal, but we, ourselves, will not share that immortality.

      1. MJ, the cell structure maintaining your consciousness is not the same cell structure you had last year.

        Your sense of self is living inside an imposter! < cue scary music >

        1. Any download of your consciousness that is capable of existing independently of the original you will be, by definition, a copy and not you. I am not suggesting that this is scary, just pointless as a road to immortality.

          1. That’s not the way I see the barrier being crossed.

            I don’t see the possibility of “upload”. That concept does seem to have the problems you are highlighting here.

          2. Aye, transferring the consciousness is not the main goal. Upgrading the existing structure with engineered replacements as the cells degrade and fail is the most certain path.

            As per Dick Cheney with his new heart. I just don’t like the design.

      2. I would tend to agree with you.

        BUT. But.

        My brain is made of billions of cells, some of which are dying every day.

        When those brain cells die, I don’t suddenly become “not me”. Because no individual cell or group of cells is apparently necessary for my “continuity of consciousness”, as long as the whole emergent property that is my brain keeps on ticking.

        But let’s say that instead of those cells dying, they were replaced by machine elements at the microscopic level that replaced the function previously held by those cells.

        If I can be “me” even if those cells completely die, it seems to me that I’d still be “me” if those cells were replaced by machines.

        And what happens if we do that every day? If we replace every brain cell that dies with a machine equivalent?

        When am I not “me” any more in that scenario? Is there a threshold I’d pass beyond which I’d no longer be me? More importantly, would I be able to perceive the crossing of that threshold, or would it be invisible to me?

        Eventually all my brain cells would be dead and I’d be all machine. But would I ever have noticed?

        1. “If we replace every brain cell that dies with a machine equivalent?”

          Honestly, I do not know. Worse, I am not sure we can know with any certainty if it worked (though there may be ways to know if it does not). My point is that any machine copy of you must needs be a copy and have a separate existence from the original.

          1. The difficulty with that point of view is that, as Phlogistan points out, the biological components of your consciousness are changing all the time.

            So your consciousness therefore cannot be irrevocably bound to any particular privileged biological component. It’s got to be tied to the overall neuromechanical system, or you’d be “dying” every day and being replaced by a biological copy that was too dumb to know it wasn’t you.

            And if it’s tied to the system, that opens the door to at least the possibility of systematic piecemeal replacement and enhancement of individual components of the system, while your consciousness keeps chugging along undisturbed.

            1. “…that was too dumb to know it wasn’t you.”

              Not too dumb, too ignorant. Your machine self has no basis of comparison to tell that it does not share continuity with the biological original if all the memories are intact.

              I see this more as a thorny metaphysical issue, particularly since your method requires that the biological process of replacing cells be stopped to enable the machine replacement. If it does not work as advertised, you will have committed a protracted suicide.

              1. As it now stands we all are practicing a protracted suicide.

                Gaia kills and eats her children, every last one.

                1. So do I, Gaia.

                  1. There you go again with those negative waves!

                2. But to do what you and Fluffy propose may not be immortality but an early checkout if there is no continuity of consciousness.

                  1. But to do what you and Fluffy propose may not be immortality but an early checkout if there is no continuity of consciousness.

                    Every time this comes up, I wonder:

                    Is there continuity of consciousness when you’re knocked out? When you sleep? Does the person who is you die every time you nod off? If so, then there’s no problem with upload. If not, why?

              2. The real question is: what is the basis for your apparent assumption that the process of electro-mechanical replacement is qualitatively different from that of biological replacement?

                1. I do not know if there is and I do not know if there is not, and I do not think that there is any good way to test it.

                  What is your basis for assuming that a speculative process of AI replacement would be qualitatively similar to biological replacement? My thought here is that we are dealing not with concrete science and technology but with wishful thinking that can dismiss all difficulties as irrelevant because it does not have to tackle them.

                  1. Dorm room talk with enhancements.

                  2. I am not making the claim here that they are similar; it’s just that in previous posts you seemed so darned sure that no AI replacement could ever be real. This latest post is much more reasonable in my opinion. I agree that we don’t know jack diddly poo, but it’s fun to speculate.

            2. Your consciousness isn’t necessarily tied to any biological component. You know, the whole concept of the soul and whatnot. You certainly raised an interesting point, Fluffy. I have never thought about the link between biological life and consciousness. Either your consciousness is linked to something ephemeral like a soul, in which case it seems to me ‘uploading’ yourself, for lack of a better word, from biological to mechanical parts would be an impossibility; or your consciousness is linked to the general neuromechanical system, which raises the question of whether we need any biological neural components to remain ourselves, or could simply replace those biological parts slowly over time with mechanical ones while still retaining ‘ourselves’. This is a milk-and-cookies question to be sure.

      3. Should technological progress be stopped until moral progress catches up?

        To think we have ever progressed morally is vanity. I would argue morality goes in cycles but never progresses.

        1. I’d argue that morality is actually a null-state phenomenon. It’s only subjectively measured, using anchor points randomly placed as desperation mounts.

  17. Consider also that there is evidence that average human intelligence has been significantly enhanced in recent years. Specifically, University of Otago (New Zealand) political scientist James Flynn discovered that average IQs in 30 countries have been steeply rising in the 20th century.

    Verbal and Math tests have maintained their predictive power (g-loading) and have only risen 3 and 4 points IIRC in the last century. Meanwhile the other subtests, to varying degrees, have lost predictive power. Longitudinal studies show the exact same thing as people age. This suggests that raw brainpower has increased a little, but the vast majority of the 22 points is lifestyle catching up to the test (the inverse WRT aging).

    Take a hypothetical athletic test in which each subtest is correlated with the others, e.g. 40-yard dash, broad jump, and long toss. That test will measure athleticism well in baseball-playing cultures and non-baseball-playing cultures. But it wouldn’t mean much to compare the scores. Now you could argue that learning to throw a ball does increase your athleticism some. OTOH that may come at the cost of learning to kick a ball. Either way, if the two cultures are basically the same at the 40-yard dash and the broad jump, then they’re basically the same.

  18. Will cognitive enhancement destroy the human race?

    As long as we’re bootstrapping up to homo transapiens this would be a good thing.

    Especially if I get to come along for the ride.

  19. I love this topic. However, I have never encountered anyone who acknowledges the most important flaw in our unintelligent understanding of intelligence. Can you land half way on the moon? Sure, for a moment in time. Eventually you either land, or you float away, you cannot land in fractions (stolen from a movie). This is basic dialectic reasoning. If intelligence is enhanced, how can it possibly be only partially enhanced? This is a very serious problem with most theories of intelligence enhancement. It is either intelligence enhancement – the most abstract and comprehensive ability to understand cause and effect – or it isn’t. There is no evidence of partial intelligence enhancement – the enhancement is full-spectrum, an advanced ability to predict outcomes of actions.

    1. Can you land half way on the moon? Sure, for a moment in time. Eventually you either land, or you float away, you cannot land in fractions

      Very bad analogy: you can hang out at the Lagrange points indefinitely, or be in orbit around the moon, or delicately balance yourself at the center of force of the Earth-Moon system.

    2. That metaphor doesn’t really make sense to me. Human intelligence has a myriad of components. It isn’t an all or nothing item. For example, you could be enhanced to acquire new languages more readily but not gain any greater ability at mathematics and still hit a wall when studying calculus. Or just the opposite. You could find yourself able to master mathematical concepts you’d always struggled to learn but remain unable to learn any languages beyond what you were taught as a child.

      These differing enhancements would lead to different choices in your life but both would count as enhanced intelligence.

    3. I read that they’ve already tied a PC (basically) into somebody’s brain. Gives the best of both kinds of “brains”, because computers are good at all the things humans aren’t, and vice versa.

      They said it wasn’t long before this guy felt that the computer was part of him, part of who and what he was.

      Intelligence “enhancements” could take many forms.

      OTOH, you are right. A is A. The computer is, or is not, connected to the brain-bone.

      Most likely, I think, is that we who are people will evolve ourselves into something that is in no particular way “people” as we now know them. We shall become a derivative of our former Self.

      If you think Zen is cryptic now, just wait.

      Of course this makes it clear that Fluffy’s boob fetish has condemned him (her?) (it?) to people as we now know them.

  20. Shorter version: it’s the difference between being smart and being wise.

    Larry Niven explored this in his short stories about Gil the ARM. One of ARM’s responsibilities was suppression of technology that would destabilize society.

    If ARM had existed at the time (in the stories) when commercial fusion was invented they would have suppressed it, because it gave millions of people access to their own thermonuclear bomb–but ARM came in after that.

  21. Advancement is always scary, but you can’t make a decision based solely on the costs without taking the benefits into consideration. Think of the amazing benefits that exponentially increasing mental capacity could enable. Use your imagination to think optimistically for a moment. Those possibilities are worth the risk.

    1. ok, here I go imagining.. Hmmm boobs

      Drat ok! This time I’ll imagine exponentially increased mental capacity!

      Hmmmm… 8 boobs

      Drat…

      1. I would think, “Once I have married my immortal consciousness to nanotechnology that can rearrange the environment, I’d take over some small planet somewhere and re-run evolution on it until I evolve up some humanoid life forms, the female variety of which will have REALLY BIG BOOBS!”

        1. I think it’s like Mann’s graph. Does not matter what data is input. Even if you exponentially increase the capacity, you get the same results.

          You’re still playing with a base programming of evolutionary imposed data sets.

          Therefore, Boobs!

          1. Since I don’t see any way to improve on boobs, I have no problem with that.

            The point of enhanced intelligence is more and perkier boobs, and not somehow a transcendence of boobs.

            In fact, anyone planning on trying to talk me into an enhancement that would take boobs away from me can go try to sell that shit on the other side of the street, ’cause I ain’t buying.

            1. So you’re rooting for homo transboobien, then?

            2. They could lactate liquor rather than milk

  22. So, if everyone is smarter doesn’t that mean the watchers are smarter as well? Wouldn’t that just mean that the amount of intelligence needed to destroy the world would increase with the intelligence levied against such actions?

    1. Skr – nice! But, no. Don’t need to know how to design the gun.. Just got to pull the trigger.

      Besides, all the really scary things out there to destroy the world have not a single point of intelligence behind them. Gamma ray bursts, novas, a really nasty solar flare. None respond to “Stop Resisting!” very well at all.

      1. For some reason I don’t imagine we’ll see “thermonuclear warhead shops” anytime soon like we do gun shops. In which case you would have to know how to “design the gun”, warhead, virus, or genesis device as the case may be. Maybe that’s because we have reached a point in moral evolution where it really isn’t that dangerous to have gun shops. As far as the gamma ray bursts, I think I would err on the side of having as many intelligent people around to counteract that (or at least detect it) instead of worrying about Dr. Evil.

        1. “As far as the gamma ray bursts, I think I would err on the side of having as many intelligent people around to counteract that ”

          I like it! We breed intelligent people to use as an orbital shield against incoming GRB! A layered defense would be best.

          Sooo how many Elites would it take to absorb the incoming blast from a GRB?

          1. as an aside.. Nukes really don’t give me much concern considering their actual damage output.

            The strange Palmdale Bulge, Siberian Traps, Yellowstone and up to 50 other large scale volcanic event structures dwarf all nuclear devices for destructive power.

            Dr. Evil has a lot to learn…

            1. It’s impossible for current nukes to kill off every person everywhere. 6 or 7 billion people scattered in virtually every habitat everywhere are VERY hard to wipe out. Hell, even the bubonic plague only took out about half of Europe’s population in a time of dismally poor medical science.

              A handful of evil people can’t take out all the rest of us.

              1. And if we colonize space before annihilation due to the enhanced intellect, the argument falls completely flat.

            2. …Palmdale Bulge…

              Sounds like a gay bar in the Antelope Valley.

          2. We had better start breeding them fast. We’re going to need a lot of mass.

  23. Please allow me to
    introduce myself….

  24. (More research, they argue, should be directed toward figuring out how to make people more altruistic and less aggressive….some brain chemicals like oxytocin and selective serotonin reuptake inhibitors already promote trust and increase cooperation. Once developed, they argue, “safe, effective moral enhancement would be compulsory.”)

    M-I-R-A-N-D-A

  25. don’t get in the way of this, Morons! This kind of tech could help my Asperger syndrome and ADHD, and my sister’s RA!

  26. So if I read this correctly, the key to offsetting the evil downside of “cognitive enhancement” is “moral enhancement”? But I thought Religion was the cause of all war and violence. I’m confused (not really). I’m pretty sure after reading this that moral enhancement is precisely why we haven’t already destroyed ourselves. Apparently we’re going to have to speed up the “moral enhancement” to keep up with the “cognitive enhancement”? Maybe Dr. Wayne Dyer can help us with that? Really, is this what they do at Harvard now? I think the authors are cognitively over-enhanced to the point of diminishing returns. What a pile of crap this article is!

    1. But it’s fun rubbing their faces in it.

  27. So what if the human race is destroyed? It’s going to happen sooner or later anyway. A billion years from now whatever life exists on Earth won’t have any idea that humans were ever here.

    1. I just want it to put up a SERIOUSLY epic transgalactic battle before going down.

      Sniveling and whimpering into the nothing irks me.

      1. Yeah, dammit. Lasers!

  28. I personally use an oxytocin based sublingual drop supplement and will say it has helped improve my moral attitude, as well as my relationships and my health, and likely spared me a divorce. I think this could be the stuff that creates world peace. I’m all for the research.

  29. Wait, I’m confused. They say “safe, effective moral enhancement would be compulsory”, but forcing such an option on people would be completely immoral. *ERR: logical fault* That should completely demolish any argument these guys have about morality, especially of the ‘enhancement’ variety. It seems they have forgotten the last time ‘enhancement’ was made compulsory (i.e. Nazi Germany).

  30. More research, they argue, should be directed toward figuring out how to make people more altruistic and less aggressive.

    More research, I argue, should be directed towards figuring out how to teach academic philosophers what ethics is really about.

    Since when is “altruism” the essence of ethics? I give them a D on their first term paper for this. “Altruism is good because, well by-god it’s just good and I can feel it in me bones, aye.”

    1. Those bozos seem to forget that people have already been killed using altruistic excuses. Both Stalin and Mao claimed that the millions who died were sacrificed for the greater good.


  32. Fluffy|9.20.11 @ 5:37PM|#

    Using current technology, it’s probably already possible for a determined group to …

    ‘Group’ is the key word. The focus on just individual intelligence is somewhat misguided, if the worry is existential threats. Technology is rapidly increasing the ability of groups, including distributed and loose-knit groups, to cooperate and share expertise. That’s one of the themes, as I read it, of Vinge’s excellent ‘Rainbows End’.

  33. Sad thing is, even with superintelligence socialism still couldn’t work.

  34. No reason to think our enhanced intelligence wouldn’t also allow us to counteract these evil schemes as well as concoct them.

  35. One guy with sufficient intelligence and sufficiently absent morals could, with the cooperation of an amazingly small number of dupes, already eradicate most of the human race, using engineered bioweapons.

    To someone with the right sort of education, the only hard thing about making bioweapons is making them _safe_. (That is, making sure they only kill who you want to kill, when you want to kill them.) If you’re just flat-out psychopathic and are fine with killing pretty much _everybody_…well, that’s not nearly as tough.

    The human race has, for decades now, owed its continued survival to the fact that flat-out balls-to-the-wall psychopathy on that scale is really rare. Because if “Dr. Evil” were real, he’d have already won, and we’d already be dead.

  36. “Sure as I know anything, I know this – they will try again. Maybe on another world, maybe on this very ground swept clean. A year from now, ten? They’ll swing back to the belief that they can make people… better. And I do not hold to that. So no more runnin’. I aim to misbehave.” — Malcolm Reynolds, Serenity

  37. I was surprised to read that there’s been, on average, less than 1 international conflict per year in the new millennium. Call me crazy, but I could have sworn there have been at least 2 such conflicts each year since 2003: Iraq and Afghanistan. A minor quibble largely irrelevant to the debate, but just sayin…

  38. Libertarians should start reading “IQ and the Wealth of Nations” by Richard Lynn on the double.

    High average IQ and economic freedom are both the most important predictors of a society’s prosperity (as reflected in GDP/capita).

    Libertarians should stop ignoring the first and realize that high average intelligence of the population is a prerequisite for a libertarian state (smart people are more likely to be libertarians).

    Apparently, however, many people are of the impression that Congo or Somalia are just as likely to successfully implement libertarianism as Hong Kong and Singapore. This is false, and the reason has to do with the average intelligence of the populations living in those countries/territories.
