"May Cause Gambling"
Some Parkinson's disease patients taking drugs to boost their dopamine levels believe the treatment makes them looser not only physically but sometimes morally as well. People with Parkinson's disease have reduced dopamine levels in their brains, which causes body tremors and, in the worst cases, immobility. Neuroscientists believe that dopamine is also the brain's own "feel good" drug. In most people dopamine activates the brain's reward circuits, encouraging them to repeat behaviors that elicit a shot of the neurotransmitter.
Now the Washington Post is reporting that in a very few Parkinson's patients, these dopamine-boosting drugs so overload the pleasure circuits that formerly staid people become promiscuous or develop gambling addictions.
The article was interesting, but I was fascinated by the following observation from the Post reporter:
The notion that brain chemicals play a powerful -- but hidden -- role in human behavior is at odds with American convictions about free will and choice. Kanuch and other [Parkinson's] patients said they spent years believing they were responsible for their actions, only to find that the impulse for self-destructive behaviors vanished once they stopped taking a drug.
Drugs playing a powerful role "at odds" with convictions about free will and choice? Perhaps, but Americans are quite happily gobbling up pills to fix their depression, insomnia and attention deficits--all of which might once have been blamed on demons or lack of character. And whatever "free will" is, it operates through brain chemicals (unless you believe the Cartesian notion that the soul controls your body and brain by sending immaterial signals through the pineal gland).
Des Cartes can bite me. That is all.
On a completely unrelated note, I have a suggestion for the book review section of Reason. How about Adapting Minds by David Buller? It is supposed to be the strongest critique of evolutionary psychology thus far. So I hear, at least. Perhaps you guys could get Steven Pinker to write the review.
...unless you believe the Cartesian notion that the soul controls your body and brain by sending immaterial signals through the pineal gland...
Heh heh, those wacky substance dualists.
Drugs playing a powerful role "at odds" with convictions about free will and choice?
I think it's more at odds with the popular perceptions of morality, i.e., that moral choices are less biochemical (=more mystical) than other choices like, say, deciding to go to sleep.
FWIW, when I sold my soul to Microsoft they took my pineal gland with it.
My amateur understanding:
serotonin - sense of well-being (not necessarily pleasure; also possibly integral to a "wireless" nervous system in the gut)
dopamine - ability to act and problem solve
norepinephrine - ability to decide
My guess is that they were probably already behavioral addicts, only they didn't have the "juice" to actually act on it until they took the drug that raised their dopamine levels.
Please include the standard caveat that I don't know anything about these people or much at all about the intricacies of psychopharmacology.
I'm not sure about dopamine and norepinephrine, but serotonin levels have a lot to do with both depression and obsessive compulsive behaviors, because SSRIs such as Prozac work well on those disorders by inhibiting the reuptake of serotonin.
On the same continuum as OCD behaviors are such things as gambling, body dysmorphic disorder, and hypochondriasis.
Perhaps you guys could get Steven Pinker to write the review.
That would be SWEET!
The brain responds to chemicals, and also creates chemicals.
It's two mints in one.
Pinker is too busy with big issues, like How The Mind Works. I'm more into details; I loved the book Does The Mind of Steven Pinker Contain Any Deep Insight into Any Issue Whatsoever?
I'm not sure what qualifies as "deep insight" for you, Juice, but for a layman like myself, Pinker's books are thought-provoking and interesting. While I'm sure his theories are far from perfect, I don't think it would be fair to describe them as shallow, or as repeating conventional wisdom.
Why is it that every time I read about this story, the increased gambling is given more attention than the increased sex? I would think that "Parkinson's Treatment Turns Patients Into Nymphomaniacs" would be the more effective headline.
Some of Pinker's core arguments are hard to refute. The computational theory of mind has more support than any other option, and most other options aren't very serious.
Given that, it is hard to argue that mental modules weren't evolved.
Where it gets shaky to me is when specific claims about specific behaviors are made.
Parkinson's Treatment Turns Patients Into Nymphomaniacs
[Googles for "Parkinson's Treatment Center Dallas"]
Jason- I'm only tangentially familiar with the mental modules argument (How the Mind Works has been in my to-read pile for a while now), but I tend to agree. To suggest that at least some elements of the human mind and psychology are not the result of evolution is prima facie silly. The real question, IMO, is to what extent behaviors can be explained by the evolved structure of our brain. I once heard Orson Scott Card state that "99% of human behavior can be explained in terms of monkey behavior." While Card was being a bit tongue-in-cheek, and was clearly exaggerating for effect, it's hard to deny that on some level what he said makes sense. If you think of humans as herding animals, and view the actions of groups as being reflective of the actions of a gaggle of monkeys fighting for food, territory, and the supremacy of their herd, a lot of things suddenly start to make sense.
I am not, of course, suggesting that all human psychology is derived from early primates. How much can be explained that way, and to what extent the negative tendencies associated with that sort of psychology can be mitigated, is an interesting question.
If nothing else, Pinker deserves credit for demonstrating just how absurd the notion of a blank slate is.
The Blank Slate is the only Pinker I've read, because it was painful to see potentially useful information buried in rhetoric and divorced from statistics. Additionally, Pinker could have found much more worthy adversaries.
Overall it looked to me like Pinker and his editor chose to write a popular book rather than a cogent one. I'm not blaming him for doing that, but the result is a book I can't recommend to people who might benefit from some of his nuggets of truth, because they'll be distracted by the straw men, special pleading and undocumented anecdotes.
Perhaps How The Mind Works is much better, but it never makes it to the top of my reading list. Cha-ching, thanks for pointing out Adapting Minds.
"Some of Pinker's core arguments are hard to refute. The computational theory of mind has more support than any other option, and most other options aren't very serious."
There are some nice refutations of the computational theory of mind out there. In fact, the computational theory of mind has a unique feature in that it requires all knowledge to be innate (Fodor 1980), which is clearly hogwash. The problem comes when you try to tie mental representations to meaning... the computational theory has no way to do this that does not lead to an infinite regress of higher-level representations.
I would recommend "Catching ourselves in the act"
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=7749
for a nice critique of the computational/representational view of mind.
Pinker is very smart. His theory, however, is not correct or supported by the available evidence on human development.
Check out the work of Tomasello and Deacon for a more reasonable set of theories.
MSM,
I fail to see how the computational theory of the mind requires all knowledge to be innate. I don't have Fodor 1980 to see specifically what you're talking about, but the overview of CTM in the Stanford Encyclopedia of Philosophy doesn't suggest anything of the sort.
Books contain knowledge. Believers in computational psychology have seen books. Knowledge in books is not inherently innate.
My understanding of the computational theory of the mind is that the mind is like a computer, one with firmware that helps it get started. There's a lot more knowledge in my computer than it came with from the factory.
Perhaps someone will argue that there's a difference between information and knowledge, but even such an argument won't get us to your (I am not going to attribute it to Fodor since I can't see his work) end-point.
If you dismiss the data I've loaded onto the computer as not being knowledge, what do you say about the programs I've loaded? Those programs are knowledge to the computer. They are ways for it to interact with the outside world that it didn't have before. If you dismiss programs as knowledge, then that dismissal itself requires a definition of knowledge that winds up with a tautology using such a narrow definition that the claim "all knowledge is innate" is uninteresting and certainly not a refutation of the computational theory of the mind.
However, I've never seen anything to suggest that even all the knowledge stored in an adult's brain is innate, much less all knowledge being innate.
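As a toy sketch of that "firmware plus loaded programs" analogy (the class and method names here are purely hypothetical illustrations, not drawn from any CTM literature):

```python
# Toy analogy: a system whose "innate" core is a small, fixed set of
# primitive skills, but which can acquire new skills at runtime.

class Mind:
    def __init__(self):
        # "Firmware": the only capabilities present at the start.
        self.skills = {"add": lambda a, b: a + b}

    def learn(self, name, skill):
        # Loading a capability the system did not ship with.
        self.skills[name] = skill

    def do(self, name, *args):
        return self.skills[name](*args)

m = Mind()
m.learn("square", lambda x: x * x)  # acquired from "outside"
print(m.do("square", 7))  # prints 49
```

Whether a loaded routine counts as "knowledge," of course, is exactly the definitional question at issue.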
BTW, Catching Ourselves in the Act looks interesting, but the synopsis on the MIT Press page states
To me there's nothing incompatible between interactive emergence and CTM. The first sentence of the final paragraph of 3.3 of the Stanford paper suggests the same:
I'm not suggesting that interactive emergence is a type of connectionist network, I'm just claiming that CTM accommodates more than what is traditionally thought of as rule-based symbol manipulation.
It's funny how contemporary scientific findings and machinery inspire explanations of Life, the Universe, and Everything. (Although the correct answer is 64.)
I bet there were thousands of Pinkers in Newton's time, ready to explain How The Universe Works. Just as the human mind is now considered by some to be just a complex computing system, the universe was considered by many to be a kind of complicated clock, a lot of machinery rotating about according to the laws of Newtonian mechanics. People want simple, solid explanations of the whole. It's comforting to add: This is in our scriptures. Or: Newton's calculations support this.
Great scientists were at their best when they occupied themselves with some specific problems. Their general metaphysical explanations seldom stand the test of time. To quote William Blake: General knowledge is the kind of knowledge idiots possess.
Now, when you pass through Las Vegas, beware if the coffee tastes funny. They'll stop at nothing to get you into the casino...
Juice,
What are you saying? Do you believe that Cartesian dualism is still a viable alternative to CTM? Or are you a believer in CTM overall, but that Pinker's specific explanations of how the computing is done are overreaching?
As you can read above, MainstreamMan and I appear to have different ideas concerning the scope of CTM, so by providing your stance on Cartesian dualism, you'll help me better understand where you're coming from. If you are a dualist, are you also a deist? A theist?
"To quote William Blake: General knowledge is the kind of knowledge idiots possess."
Relax killer
Anon2
"Books contain knowledge. Believers in computational psychology have seen books. Knowledge in books is not inherently innate."
Books actually contain representations of knowledge, not knowledge (the book doesn't know anything). The problem comes in how you get from representation to knowledge. This is usually referred to as the grounding problem, and to my knowledge no CTM has tackled it seriously. It is usually just set aside. But since CTM is trying to explain knowledge, the grounding problem is one that needs to be faced. Fodor's reasoning is complex, but the basic kernel is that if the mind creates meaning by computing with representations of knowledge, it cannot create knowledge that is not a combination of already existing knowledge (combinations of representations through computation).
Hendriks-Jansen's model is not computational or connectionist (which is just a variant of CTM). I am not sure his alternative solves the grounding problem, but it gets closer than CTM. He does, however, do a nice job of showing the weaknesses of the CTM.
Anon2,
BTW, Hendriks-Jansen is working along lines similar to Brooks and company...
For those interested here's a Mind and Language journal featuring Pinker and Fodor duking it out.
http://www.blackwell-synergy.com/servlet/useragent?func=synergy&synergyAction=showTOC&journalCode=mila&volume=20&issue=1&year=2005?=null&cookieSet=1
Pinker's response to Fodor:
http://72.14.203.104/search?q=cache:JVf5aAIPW8cJ:pinker.wjh.harvard.edu/articles/papers/So_How_Does_The_Mind_Work.pdf+So+How+Does+the+Mind+Work%3F&hl=en&gl=us&ct=clnk&cd=1
And whatever "free will" is, it operates through brain chemicals (unless you believe the Cartesian notion that the soul controls your body and brain by sending immaterial signals through the pineal gland).
Posted by Ronald Bailey at March 24, 2006 10:54 AM
don't be ridiculous, Ron. everyone knows it works by sending signals through the hypothalamus.
sheesh.
Dopamine-elevating drugs don't alter a person's moral sense; they merely dramatically increase the strength of the pleasure signals transmitted by dopamine. The drugs increase temptation by making activities feel much more pleasurable than before. People who previously found gambling mildly amusing or even boring now get a tremendous rush from it.
Morally, it is like the sexual arousal cycle. You're more likely to give in to temptation and do something stupid after a dry spell, when the sexual arousal signals are strong, than right after you just got some.
This is a really old effect, first noticed back in the '60s with L-dopa. It is all the more dramatic because it often occurs in elderly people who, even before their illness, had long since moved beyond most temptations. Suddenly, risky behaviors feel as good or better than they did when they were 18. No wonder people are tempted.
MSM,
There is a difference between a container and that which it contains. Books indeed are representations of knowledge. But those representations contain knowledge.
The point I was making is that it's unlikely that Fodor 1980 states what you've claimed it states. Since CTM is supposed to be a theory of the mind, and since it's well known that minds can gain knowledge from outside sources, e.g. books, CTM would be useless if it were to require all knowledge to be innate.
Even if you don't like my use of contains, and we stick with "representations of knowledge" my point holds. Since a quick Googling didn't turn up the text of Fodor 1980, I'm not going to bother to read it. It's highly unlikely to me that you're representing it correctly. If you, or anyone else, can quote some text that backs up your assertion, I'll read it.
emme,
Thanks. I couldn't use your first URL, but last night I skimmed the twenty-odd pages that your second ones pointed to. If I read it right, it looks like Pinker is taking Fodor to task for, among other things, an excessively narrow CTM definition (which Pinker claims Fodor doesn't use consistently). In that regard, I agree with Pinker. If one is allowed to over specify what CTM entails then indeed it's possible to refute some of these particular instances of CTM, but that clearly has no bearing on a broad CTM definition.
I used the Stanford Encyclopedia of Philosophy entry as evidence that a broader CTM is out there, because I'm lazy. If I had spent more time searching, I probably could have found better examples, but this isn't a topic I particularly want to spend much time posting about. However, since you're sufficiently familiar with the literature, would you like to comment on MSM's statement
and whether that's a fair conclusion to be drawn from Fodor 1980?
Anon2,
I'm just saying that there seems to be a funny resemblance between the over-simplified, mechanical post-Newtonian image of matter and the cosmos and the insights of the likes of Pinker on the subject of the human mind, awareness, AI, &c. But surely Cartesian dualism isn't the only alternative to CTM. Neither school of thought seems to put much stress on how actual organic existence is in constant feedback (not just feeding) with all mental development, including computing ability. What's happening inside Deep Blue has very little to do with what's happening in Kasparov's head. When will IBM come up with a machine that can do what Kasparov can do with his mere 20 (?) calculations/s? My guess is never, and that's not what they're aiming at anyway.
What am I saying? I think MainstreamMan put it well:
Books actually contain representations of knowledge, not knowledge (the book doesn't know anything). The problem comes in how you get from representation to knowledge. This is usually referred to as the grounding problem, and to my knowledge no CTM has tackled it seriously. It is usually just set aside. But since CTM is trying to explain knowledge, the grounding problem is one that needs to be faced.
Juice,
Perhaps your universe-as-clockwork is a good analogy. OTOH, in the early days of digital computing, analog proponents had a number of misbeliefs about what you can represent with binary data.
If you think the grounding problem is a legitimate objection to CTM, does that mean you think that Searle's Chinese Room is a legitimate objection to strong AI? If you do, there's no way I'd attempt to convince you otherwise by just typing a few paragraphs into a blog and I don't think we would have enough common ground to make discussion of the grounding problem worthwhile for either of us.
My opinion is that Searle's Chinese Room is as valid an objection to strong AI as it is to brain-centered AI. After all, the individual neurons and all the supporting matter are only following a set of rules (physics) and know nothing individually of the intelligence that they provide as part of a system. Since I'm not a dualist, that leaves me thinking that the Chinese Room is an uninteresting tautology disguised as a thought experiment.
Either way, I'm using a broad definition of CTM. As far as I can tell, my working definition is compatible with some people who actually care about this stuff, but is incompatible with others (or in some cases incompatible with others when they're making specific arguments where a narrow definition is useful).
BTW, I didn't go into the grounding problem in my response to MSM, because I'm a fan of fixed goalposts. My initial reply to his post was over his claim that Fodor 1980 stated/implied/required/whatever that CTM requires all knowledge being innate.
I couldn't find anything on the net to suggest that his representation of Fodor 1980 is valid, which suggests to me that it would be a waste of time to track down the paper, read it and then point out that it says no such thing. There's a chance that I'm wildly wrong and that it says so in so many words or that everyone reading the paper comes to the same conclusion, but I'm willing to take that risk.
Anon2,
I am not mischaracterizing Fodor. I don't have the text handy, but here is a summary from Benny Shannon (1993)..
"For anything to be processed by this system [a representation based computational mind], it has to be represented...this is tantamount to saying that all pertinent distinctions have to be already specified in the system. Thus, the set of distinctions, aspects and dimensions to which the cognitive system can relate--those it can recognize and process-- have to be already specified before the system can do anything....[this] implies radical nativism.
"This has been argued for most explicitly by Jerry Fodor in The Language of Thought (1975). Fodor points out that computational operations manipulate symbols in given representations; they may organize these symbols and rearrange them, but they cannot create new distinctions in the representational system. Thus, in effect, cognitive activity consists of the selection of hypotheses from given represented structures. Consequently, the computational operations do not change the expressive power of the representational system. Again, this holds for all occurrences of processing, at all stages of development.
"In sum, all representational models have to assume an already existing, non-empty representational core....But Fodor goes further. Both in The Language of Thought and elsewhere (Fodor et al, 1980; Fodor 1981), he not only argues for the existence of an innate representational core, but propounds a radical stance by which practically everything that one knows is innate...therefore, by and large, all that one knows has to be already known at the outset of one's cognitive life. In other words, all knowledge is innate...
"Fodor himself characterizes his radical nativistic conclusion as "blatantly preposterous", yet he finds it inevitable none the less..."
This is an old position that goes back to Plato's Meno.
Re: dopamine
In addition to its role in reward pathways, there's also been work (in animal models) indicating that some dopaminergic drugs (D3 agonists, if you care) increase risk-taking behavior.
Anon2,
the analogy of the universe as one huge clockwork in one huge room seemed like a good one to a good deal of people before Einstein's time, many Christians and deists included.
I'm just as much in awe as anybody of what can be done with binary data, but I think that the amazement at these innovations can also blind us so that plain metaphysical assumptions are mistaken for hard science.
> After all, the individual neurons and all the supporting matter are only following a set of rules (physics) and know nothing individually of the intelligence that they provide as part of a system.
But there is no neuroscientific theory of consciousness as a whole, only knowledge of some correlations related to the phenomenon. If you kick a corpse and a living human being, the same laws of physics apply to both. Also, neither the cells in the corpse nor the cells in the living guy experience the pain.