Compelling Reasons for Cooperation


The Evolution of Cooperation, by Robert Axelrod, New York: Basic Books, 256 pp., $17.95

Must cooperative behavior be forcibly imposed by some central authority, or can cooperation arise spontaneously from the choices of interacting individuals? Antiauthoritarians who challenge the common assumption that cooperation and order proceed from authority and imposed design will be heartened and fascinated by Robert Axelrod's The Evolution of Cooperation. For Axelrod explains how non-imposed cooperation is possible and investigates what ongoing strategies for interaction with other agents best enhance, through mutual cooperation, each strategist's well-being.

Axelrod's theoretical study begins with the assumption that most cooperative opportunities have the structure of a "Prisoner's Dilemma." Suppose that each of two individuals must decide whether to perform some service for the other, and neither can know whether the other's service has been performed until well after the fact. Nor does either individual have access to a "central authority" who can punish nonperformance.

In such a situation, the following will be true: Each party is better off if both perform (cooperate) than if both do not perform (defect). But each party is best off defecting while the other party performs, and each party is worst off cooperating while the other party defects. Each party, with his own self-interest in mind, will choose to defect, because each will reason that if the other party cooperates, he can get the best possible payoff by defecting; while if the other party defects, he can protect himself from the worst payoff by defecting. Unhappily, since each party will defect following this reasoning, both parties will end up worse off than with mutual cooperation. All of which sounds pretty dismal.
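For readers who want to see this logic in working form, here is a minimal sketch of the one-shot dilemma. The names PAYOFFS and best_reply, and the particular numbers, are my own illustrative choices; any payoffs in the same order (temptation above reward above punishment above sucker's payoff) tell the same story.

```python
# A minimal sketch of the one-shot Prisoner's Dilemma described above.
# The payoff numbers (5, 3, 1, 0) are illustrative.

PAYOFFS = {          # (my move, other's move) -> my payoff
    ("C", "C"): 3,   # reward for mutual cooperation
    ("C", "D"): 0,   # sucker's payoff
    ("D", "C"): 5,   # temptation to defect
    ("D", "D"): 1,   # punishment for mutual defection
}

def best_reply(others_move):
    """The move that maximizes my one-shot payoff against the other's move."""
    return max(("C", "D"), key=lambda my_move: PAYOFFS[(my_move, others_move)])

# Defection is the best reply whether the other party cooperates or defects...
assert best_reply("C") == "D" and best_reply("D") == "D"
# ...yet mutual defection (1 apiece) leaves both worse off than mutual cooperation (3 apiece).
```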

It doesn't do to point out the simple and important fact that the interaction between the parties is not a zero-sum game, in which one party can gain only if the other loses. Although each party stands to gain by cooperation, this does not show how cooperation is possible; for cooperation may still not arise. This is the theoretical context in which we must ask again, how is cooperation without central authority possible? And we must not trivialize this question by simply assuming, for example, that each agent does not have separate aims but in fact cares about the total "social" payoff.

The answer Axelrod investigates is that cooperation is possible under three conditions. First, individuals must believe that they will be faced with similar decision situations in the future. Second, they must believe that what they do now will affect their payoffs in these future situations. Finally, they must care enough about these future payoffs. In other words, through repeated playing out of Prisoner's Dilemma situations, it can become rational to cooperate within particular situations of this sort. In itself, this answer is fairly obvious.
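One can put rough numbers on "caring enough about the future." The comparison below is my own back-of-the-envelope framing of the three conditions, not Axelrod's derivation; the weight w, and the functions value_of_cooperating and value_of_defecting, are illustrative names. Against a partner who reciprocates, the choice is between cooperating throughout and grabbing the one-time gain from defection followed by mutual defection ever after.

```python
# Illustrative comparison, assuming a reciprocating partner: cooperate
# throughout, or defect once and face mutual defection thereafter.
# The weight w (between 0 and 1) measures how much each next encounter matters.

T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

def value_of_cooperating(w):
    # R in this round and in every future round, each future round weighted by w.
    return R / (1 - w)

def value_of_defecting(w):
    # T once, then the partner withdraws cooperation: P ever after.
    return T + w * P / (1 - w)

for w in (0.3, 0.9):
    print(w, value_of_cooperating(w) > value_of_defecting(w))
# With these payoffs cooperation pays once w rises above 0.5: a short shadow
# of the future (w = 0.3) favors defection, a long one (w = 0.9) favors cooperation.
```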

For example, suppose the relevant service on my part is to feed your grandmother on day 1 and the relevant service on your part is to feed my grandmother on day 1. If you cooperate and feed my grandmother, I do best to defect and eat the Lean Cuisine intended for yours. If you defect, I do best to cut my losses by keeping the Lean Cuisine for myself. You reason likewise. Hence, mutual defection.

But suppose we both have reason to believe that a similar opportunity for cooperation will present itself for a number of successive days. It is still true, on the same psychological assumptions, that the best outcome for me involves your ongoing performance matched with my ongoing defection. But if I start defecting, there is no way that you are going to continue to cooperate and take, for each day of the game, the worst possible payoff. You will defect too, and I know it, leaving me at best with the poor payoff from mutual defection. So what I want to do, instead, is to elicit your cooperation, and the repetition of the game allows me to do this.

By cooperating on the first move I may signal my willingness to cooperate. And perhaps I can signal my ongoing trustworthiness by not seeking short-term gains through defection once I have gotten you used to my cooperation. On the other hand, I may have to make clear my unwillingness to suffer unprovoked defection by promptly punishing such defections by withdrawal of my cooperation. Under certain conditions, at least, our strategies should coordinate into cooperation. Real-world cooperation without central authority represents just such coordination—made possible by the anticipated recurrence of valuable cooperation opportunities.

The most original and fascinating part of The Evolution of Cooperation is Axelrod's investigation of the best strategies for individuals to employ in our real world of repeated Prisoner's Dilemmas. Axelrod devised a variety of computer simulations of tournaments in which diverse programs played against each other. Although no strategy turns out to be best under all possible circumstances, the marvelous core result of these tournaments is the remarkable success of a strategy called Tit-for-Tat. Tit-for-Tat is a nice strategy—it cooperates until the player with whom it is interacting defects. But it is also provocable—it immediately punishes a defection by itself defecting on the next play. However, it is also forgiving—it will resume cooperation right after the other player abandons defection. In short, it starts cooperatively and then reciprocates on its next play according to whatever the other player has done.
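Readers so inclined can reconstruct a miniature version of such a tournament in a few lines. The sketch below is only in the spirit of Axelrod's round-robin design: the rival strategies (an unconditional defector and a "grim" player who never forgives a defection), the helper play, and the 200-round length are stand-ins of my own, not his actual entries or parameters.

```python
# A toy round-robin in the spirit of Axelrod's tournaments; the rival
# strategies here are simple stand-ins, not the actual tournament entries.

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Nice: cooperate first. Provocable and forgiving: echo the other's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def grim(my_history, their_history):
    # Cooperates until the other defects once, then never forgives.
    return "D" if "D" in their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other; return their total scores."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

strategies = [tit_for_tat, always_defect, grim]
totals = {s.__name__: 0 for s in strategies}
for a in strategies:
    for b in strategies:
        if a is not b:
            totals[a.__name__] += play(a, b)[0]
print(totals)  # the two nice strategies far outscore the unconditional defector
```

Even in this toy field, Tit-for-Tat loses its head-to-head match with the unconditional defector by a few points, yet it ends up sharing the top total score with the other nice strategy; that is just the pattern Axelrod reports.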

Nice strategies, Axelrod found, are in general much more successful than mean ones. And Tit-for-Tat tends to be most successful because of its capacity to establish stable cooperative relationships. While it punishes defection, it holds no grudges. It succeeds because it establishes so many cooperative relationships (and because it does not get suckered very often)—not because it beats players who are using other strategies. While the Tit-for-Tat player is self-interested, he is completely non-envious. He advances his interests by advancing the interests of others.

Space does not permit even a listing of the interesting results and implications of Axelrod's studies. I shall make do with one characteristic example. Escaping exploitation and sustaining cooperation require that individuals be able to discriminate among those with whom they have interacted in the past. Human beings seem particularly adept at interacting with a great number of their fellow beings. What historically has enabled us to do this with the necessary discrimination is the tremendous individualization of our appearances, especially our faces, and our tremendous capacity to identify individuals by their distinctive appearances. Thus, without fully recognizing it, Axelrod leads us to another level of insight into the beneficial effects of diversity and individualism.

Axelrod's work does have its weaknesses. These range from the absence of needed distinctions between different senses of cooperation and defection to errors about, for example, the utility of unilateral free trade. One of the more systematic weaknesses is his failure to emphasize sufficiently how ongoing free-market relationships exemplify self-enforcing cooperation.

Axelrod does point out that business firms with ongoing relationships rarely need and, indeed, may actively avoid recourse to mechanisms of political enforcement. But one waits in vain for any mention of the self-policing mechanisms that, for example, sustained medieval economic fairs and that are universally found sustaining interactions within politically weak but economically active minorities. Nor does the author seem to appreciate the broad significance of the issue of spontaneous versus coerced cooperation for the choice between individualist-libertarian and collectivist-command social and economic structures.

With the exception of a passing reference to the anarchist theorist Michael Taylor, there is no recognition at all of whole schools of social theorists who have focused on cooperation in the absence of central authority. Consider, for instance, Peter Kropotkin on mutual aid and Herbert Spencer's conception of human evolution as progressive differentiation linked with greater capacity for peaceful cooperation. Most regrettably, Axelrod seems astonishingly unaware of the writings of F.A. Hayek, where one finds the most systematic, suggestive, and far-ranging discussions of the evolution of mutually beneficial economic patterns, social institutions, and even language and law itself without central authority and design. Axelrod's intriguing and invaluable work would have been even more fruitful had it been informed by the insights of Hayek and the various evolutionist, antiauthoritarian, and anti-statist traditions in social theory. Nevertheless, it remains a wonderfully thought-provoking and pleasing work.

Eric Mack teaches philosophy at Tulane University and is a contributor to the recently published volume Defending a Free Society.