But if there are reliable patterns -- if there's a degree of determinism -- then we can take steps to protect ourselves.
Reason: Would a deterministic world mean that, say, the assassination of John F. Kennedy was going to happen ever since the Big Bang?
Dennett: "Going to happen" is a very misleading phrase. Say somebody throws a baseball at your head and you see it. That baseball was "going to" hit you until you saw it and ducked, and then it didn't hit you, even though it was "going to."
In that sense of "going to," Kennedy's assassination was by no means going to happen. There were no trajectories which guaranteed that it was going to happen independently of what people might have done about it. If he had overslept or if somebody else had done this or that, then it wouldn't have happened the way it did.
People confuse determinism with fatalism. They're two completely different notions.
Reason: Would you unpack that a little bit?
Dennett: Fatalism is the idea that something's going to happen no matter what you do. Determinism is the idea that what happens depends on what you do, what you do depends on what you know, what you know depends on what you're caused to know, and so forth -- but still, what you do matters. There's a big difference between that and fatalism. Fatalism is determinism with you left out.
If I accomplish one thing in this book, I want to break the bad habit of putting determinism and inevitability together. Inevitability means unavoidability, and if you think about what avoiding means, then you realize that in a deterministic world there's lots of avoidance. The capacity to avoid has been evolving for billions of years. There are very good avoiders now. There's no conflict between being an avoider and living in a deterministic world. There's been a veritable explosion of evitability on this planet, and it's all independent of determinism.
Reason: What do you mean when you call human beings "choice machines"?
Dennett: That's actually Gary Drescher's phrase. He's an artificial intelligence theoretician. He distinguishes choice machines from situation-action machines.
Situation-action machines are built with a bunch of rules that say, "If in situation A, do X," "If in situation B, do Y," and so forth. It's as if you had a list that you kept in your wallet, and when important decisions came up, you looked at the list. If the conditions for a particular decision were met, you just did it. You wouldn't know why; the rule just says to do it.
A choice machine is different. A choice machine looks at the world and sees options, and it says, "If I did this, what would happen? If I did that, what would happen? If I did this other thing, what would happen?" It builds up an anticipation of what the likely outcome of one action or another would be, and then chooses on the basis of how much that outcome is valued or disvalued.
They're both machines, but one of them is much more free than the other. It's choosing its actions on the basis of its values, and it's choosing its values on the basis of what it knows.
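The contrast Dennett draws can be sketched in a few lines of code. This is an illustrative toy, not Drescher's actual formalism: the situation names, the world model, and the value table are all invented here for the "incoming baseball" example used earlier in the interview.

```python
# A situation-action machine: a fixed lookup table of rules.
# If the situation matches, it fires; it represents no reasons.
RULES = {
    "ball_incoming": "duck",
    "ball_absent": "stand",
}

def situation_action_machine(situation):
    """Act by table lookup alone -- 'the rule just says to do it.'"""
    return RULES[situation]

# A choice machine: it simulates each option, anticipates the outcome,
# and picks the action whose predicted outcome it values most.
def predict_outcome(situation, action):
    """Toy world model: 'If I did this, what would happen?'"""
    if situation == "ball_incoming" and action != "duck":
        return "hit_by_ball"
    return "unharmed"

VALUES = {"unharmed": 1.0, "hit_by_ball": -10.0}

def choice_machine(situation, options=("duck", "stand", "catch")):
    """Evaluate each option against the world model; choose the best."""
    return max(options, key=lambda a: VALUES[predict_outcome(situation, a)])

print(situation_action_machine("ball_incoming"))  # duck
print(choice_machine("ball_incoming"))            # duck
```

Both machines duck here, but for different reasons: the first because a rule matched, the second because it anticipated being hit and disvalued that outcome. Changing the choice machine's values or its world model changes its behavior, which is the sense in which it chooses "on the basis of what it knows."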
Reason: Where do our values come from in the first place?
Dennett: The Darwinian answer is a really good one. They don't come ex nihilo. They evolve over time. Our responsibility for our values is not absolute and it's not zero. You can't choose who your parents are, you can't choose what culture you belong to, and you can't even choose your kindergarten teacher. But as you mature, you can gradually -- this is the Darwinian part -- incorporate responsibility for your own actions. We try to turn our children into agents that can take responsibility, and then we have to do something that makes parents really anxious: We have to let go. You let go of your children and say, "I've done the best I can. Now you're on your own. I've created this hopefully moral agent and released this person into the world."