Could Hobbes Trounce Hayek?

Freedom for the individual is often thought to mean chaos for society, but exciting new research shows how and why self-interest is served by cooperation.


What is the nature of human society when people are free to do as they choose? Seventeenth-century philosopher Thomas Hobbes had one answer. In Leviathan, he wrote: "During the time men live without a common power to keep them in awe, they are in that condition which is called war, and such a war as is of every man against every man." Moreover, this condition of a war of all against all was one of spiritual and material poverty. In it, he asserted, there were "no arts, no letters, no society, and, which is worst of all, continual fear and danger of violent death, and the life of man solitary, poor, nasty, brutish, and short."

For Hobbes, then, it was necessary that someone take upon himself the task of bringing order out of chaos by enforcing his rule over the rest. There might be many ways in which such a "law-giver" could legitimate his rule, but fundamentally some such rule was needed.

Contemporary economist and political theorist Friedrich A. Hayek has a more libertarian answer to the question of what kind of society people form under conditions of freedom. In his trilogy Law, Legislation and Liberty, written in the 1970s, he said people form a "spontaneous order," one that is the product of human action but not of human design. He distinguished between "made" orders, which are constructed to fulfill specific purposes, such as the order of an army, and the "spontaneous" order that he claims arises, without planning, from the free interaction of individuals.

Both writers make plausible cases for their respective positions. I am certainly more sympathetic to Hayek's views than to Hobbes's. Nevertheless, we still must ask: What does history actually tell us about the emergence of society, order, and the state? Contemporary anthropologist Elman R. Service, in Origins of the State and Civilization, asserts that in all cases known to archaeology or anthropology, states arose in response to external threats, not in response to civil strife. That is, they arose to prosecute a war against an external enemy, not to halt an internal war of all against all. Furthermore, he points out that if a ruler does intervene in a domestic dispute, the ruler will encounter the enmity of at least one side, and possibly both sides. He makes the point that "new or simple governments do not willingly or lightly undertake tasks that risk their power or authority." Hence governments established to prosecute a war are not going to move very aggressively to extend their power domestically.

What kind of external enemies might a government be established to defend against? Initially, governments were established to protect settled communities against nomadic raiders. Ultimately, as governments grew in scope, empires emerged and these imperial governments fought against rival empires.

In short, according to Service, before governments were ever formed, some kind of order had already been established, including specialization of functions such as priestly roles and adjudication of disputes by "wise men." Contrary to the notions of Hobbes, people without governments have not necessarily engaged in wars of all against all.

A specific illustration of this is given by Colin Renfrew in the November 1983 Scientific American. Renfrew, a British archaeologist, describes his excavation of a prehistoric tomb in the Orkney Islands off the coast of Scotland. This was an earth mound covering well-constructed sandstone burial chambers. The tomb was in use from about 3200 B.C. (older than the Pyramids of Egypt) to about 2650 B.C. The distribution of ages and sexes of the skeletons indicates it was an "equal access" burial site, open to the entire community. The absence of elaborate grave goods indicates there was little social stratification in the society that built it. In short, here is an example of social cooperation, persisting over more than half a millennium, without any central government to direct it.

But how can such an order arise? Hayek argues that we need to act on expectations about the actions of others. In order to plan, we must have fairly accurate notions of what others will do in response to each of the alternative actions we are considering. He goes on to say that a society can exist only if rules have evolved that make society possible. In particular, the kinds of rules that would evolve among people acting freely are those rules conducive to maintaining order. Adherence to other kinds of rules would mean the society would not survive, and the ineffective rules themselves would therefore also vanish.

Hayek's thesis is plausible, but it doesn't tell us exactly what kinds of rules will evolve, nor the conditions under which they will evolve. This is why the recent book The Evolution of Cooperation, by Robert Axelrod, is so important. Axelrod presents the results of his research, which reinforce Hayek's arguments about spontaneous order. Axelrod shows what kinds of rules might evolve under various conditions. In particular, he shows how a spontaneous order can arise under conditions of freedom.

The purpose of Axelrod's research was specifically to investigate the question, "Under what conditions will cooperation emerge among egoists without central authority?" His results are based on an "investigation of individuals who pursue their own self-interest without the aid of a central authority to force them to cooperate with each other." By assuming self-interest, he can examine the most difficult case, in which people are not concerned with the welfare of their partners in cooperation or with the welfare of a group of which they are a part.

Before looking at Axelrod's results, a slight digression is necessary. Students of conflict behavior have long studied a "game" called "the Prisoner's Dilemma." The name comes from a fable in which the police have captured two criminal conspirators. The police place them in separate cells before questioning them. The prosecutor tells both prisoners that he has enough evidence to convict each of them on a minor charge, which will get them a one-year sentence. If either one will give evidence against the other, the one doing so will be set free, while the other will get a 20-year sentence. If both give evidence against the other, however, both will receive five-year sentences.

The prisoners face a serious dilemma. If they cooperate and keep their mouths shut, they both get a light, one-year sentence that won't inconvenience them seriously. If one defects while the other cooperates, however, the defector goes free—an even better payoff than from mutual cooperation—while the cooperator gets 20 years, a serious penalty. The horns of the dilemma arise from the fact that regardless of what the other prisoner does, each prisoner is better off to defect. Thus the logical thing for both to do is defect, getting five years. Both thus escape the 20-year sentence but are much worse off than if they both had cooperated.
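
The dilemma's arithmetic can be checked directly. Here is a minimal sketch in Python, using the sentences from the fable above (the table and names are my illustration, not Axelrod's notation):

```python
# Years in prison for (my_move, other_move); lower is better.
# "C" = keep quiet (cooperate with your partner); "D" = give evidence (defect).
SENTENCE = {
    ("C", "C"): 1,   # both keep quiet: minor charge, one year each
    ("C", "D"): 20,  # I keep quiet, partner talks: I get 20 years
    ("D", "C"): 0,   # I talk, partner keeps quiet: I go free
    ("D", "D"): 5,   # both talk: five years each
}

# Whichever move the partner makes, defecting yields the shorter sentence,
# so defection is the dominant move in a one-shot game.
for other in ("C", "D"):
    assert SENTENCE[("D", other)] < SENTENCE[("C", other)]
    print(f"partner plays {other}: cooperate -> {SENTENCE[('C', other)]} years, "
          f"defect -> {SENTENCE[('D', other)]} years")
```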

Now let's change the situation somewhat. Suppose we have two "players" who will play a series of games of the Prisoner's Dilemma, one after the other. The situation is no longer a one-shot affair, in which the logical thing to do is defect. In an "Iterated Prisoner's Dilemma," each player has the possibility of cooperating now in the hope of obtaining cooperation from the other in the future. If each could build up trust in the other, both could do quite well through cooperation. But this possibility of cooperation exists only if each expects to play the other again.

Iterated Prisoner's Dilemma (IPD for short) is really a thumbnail description of a situation common in society. Generally, we have repeated occasions to interact with others. In such circumstances, pursuit of narrow self-interest by each party leads to poor outcomes for all. In fact, the Iterated Prisoner's Dilemma is such a good case for analysis that, as Axelrod suggests, it has become as popular a subject of study in social psychology as the E. coli bacterium found in human intestines has become for biologists. True, IPD oversimplifies some things. It leaves out the possibility of communication; it omits the possible intervention of third parties; and it ignores problems that might arise in determining whether the other party is actually cooperating or defecting (as, for instance, in an arms control agreement in which inspection is not permitted). Even so, it can provide some very helpful insights into the problems of cooperation and conflict in society.

When students of conflict discuss the Iterated Prisoner's Dilemma, they speak of "strategies." Ordinarily the word "strategy" implies something clever. As used by game theorists, however, the meaning is more prosaic. It simply means a prescription or recipe for what to do under all possible conditions. One possible strategy for playing IPD is called "Tit for Tat," or T4T. This strategy requires that one simply echo what the other player did last time. If he cooperated, you cooperate this time. If he defected last time, you defect this time. In particular, a player using T4T is never the first to defect. Other strategies include always defecting (ALL D) and always cooperating (ALL C). ALL C isn't a particularly clever strategy, since it invites the other player to exploit its user. RANDOM, simply flipping a coin, isn't a very clever strategy either, since it doesn't encourage the other player to cooperate. Nevertheless, these are all examples of strategies, because they give the player a complete prescription of what to do in every circumstance.
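
Since a strategy is nothing more than a complete prescription, each of these can be written as a short function of the opponent's history. The Python rendering below is my own sketch (Axelrod's actual entries were stand-alone computer programs):

```python
import random

# Each strategy sees the list of the opponent's past moves ("C" or "D")
# and returns its own next move.

def tit_for_tat(opponent_moves):
    # Cooperate on the first move, then echo whatever the opponent did last.
    return opponent_moves[-1] if opponent_moves else "C"

def all_d(opponent_moves):
    return "D"   # always defect

def all_c(opponent_moves):
    return "C"   # always cooperate

def random_strategy(opponent_moves):
    return random.choice(["C", "D"])   # simply flip a coin
```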

What Axelrod did was conduct a round-robin tournament in which different strategies could compete against each other in playing IPD. He asked game theorists to submit computer programs that would use whatever strategy the submitter thought would be most effective. These strategies were then "allowed" to play against each other on a computer, with each pair of strategies meeting in a game of about 200 moves. Axelrod specified numerical payoffs to each strategy depending upon what it did (cooperate or defect) and what its opponent did. Each strategy received a score equal to its cumulative "winnings" in all plays against all other strategies. The results of the tournament were published in professional journals, and a wider range of people were invited to submit programs (which could include programs from the first set) for a second tournament.
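
A miniature version of such a round-robin is easy to reproduce. The sketch below pairs the four strategies just described in 200-move games, using the point values Axelrod reports for his tournament (3 points each for mutual cooperation, 1 each for mutual defection, and 5 for a defector against a cooperator, who gets 0); the program structure is my own simplification, and self-play is omitted for brevity:

```python
import itertools
import random

random.seed(0)  # make the RANDOM strategy reproducible

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp): return opp[-1] if opp else "C"
def all_d(opp):       return "D"
def all_c(opp):       return "C"
def rand(opp):        return random.choice(["C", "D"])

STRATEGIES = {"TIT FOR TAT": tit_for_tat, "ALL D": all_d,
              "ALL C": all_c, "RANDOM": rand}

def play_match(strat_a, strat_b, moves=200):
    """Play one game; each strategy sees only the other's past moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(moves):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

totals = dict.fromkeys(STRATEGIES, 0)
for (na, fa), (nb, fb) in itertools.combinations(STRATEGIES.items(), 2):
    sa, sb = play_match(fa, fb)
    totals[na] += sa
    totals[nb] += sb

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {score}")
```

One caution: in a field this small and this exploitable, ALL D can actually come out ahead by feeding on ALL C and RANDOM. T4T's victory emerged against Axelrod's much larger and more sophisticated field, and the ecological simulation described below shows what happens to such predators once their prey disappears.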

The objective of the tournament was to score as well as possible over a series of interactions. Doing better than the "opponent/partner" in a given interaction was much less important than cumulatively doing better than all the other strategies in the tournament. Since each pair of strategies would meet many times, the future was important.

The surprising result was that the "Tit for Tat" (T4T) strategy won hands down. It ranked first out of 63 different strategies in the second tournament (it also won the first tournament, against a smaller field). This is particularly surprising because T4T never tries to do better than the other player. Instead, it did so well overall because it was successful in getting the other players to cooperate.

The tournament also turned up another surprising result. Axelrod characterizes as "nice" the property of never defecting first, but instead defecting only in retaliation for the other player's defection. Of the strategies scoring in the top 15, only one wasn't "nice," and it came in eighth. By contrast, of the bottom 15 strategies, only one was "nice"; the rest tried to exploit the others by being "clever" about defecting first. The high-ranking strategies turned out to have four characteristics. First, they were nice—they cooperated until the other player defected. Second, they retaliated when the other player defected: they weren't "softies." Third, they forgave the other player after retaliating, and returned to cooperating. Fourth, their behavior pattern was clear, so the other player could adapt to it (RANDOM, in particular, lacks this last characteristic).

Next Axelrod tried an extension of the simple tournament: simulating an ecological situation, in which each strategy "reproduced" in accordance with its degree of success. The purpose was to determine what strategies would come to dominate the "environment." Again, T4T won, in the sense that after several generations, the other strategies died out. The strategies that weren't "nice" turned out to destroy the very environment they needed to survive.
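
The ecological extension can be sketched the same way. In the fragment below, each generation a strategy's share of the population grows in proportion to its average score against the current population mix; the proportional-"reproduction" rule follows Axelrod's description, while the three-strategy field and the names are my own illustration:

```python
# Ecological version: a strategy's population share grows in proportion
# to its average score against the current population mix.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp): return opp[-1] if opp else "C"
def all_d(opp):       return "D"
def all_c(opp):       return "C"

NAMES = ["TIT FOR TAT", "ALL D", "ALL C"]
FUNCS = [tit_for_tat, all_d, all_c]

def match_score(fa, fb, moves=200):
    """Return the first strategy's score in one 200-move game."""
    ha, hb, sa = [], [], 0
    for _ in range(moves):
        a, b = fa(hb), fb(ha)
        sa += PAYOFF[(a, b)][0]
        ha.append(a)
        hb.append(b)
    return sa

n = len(NAMES)
score = [[match_score(FUNCS[i], FUNCS[j]) for j in range(n)] for i in range(n)]

shares = [1.0 / n] * n
for generation in range(50):
    fitness = [sum(score[i][j] * shares[j] for j in range(n)) for i in range(n)]
    mean = sum(f * s for f, s in zip(fitness, shares))
    shares = [s * f / mean for s, f in zip(shares, fitness)]

for name, share in zip(NAMES, shares):
    print(f"{name:12s} {share:.3f}")
```

Run for a few dozen generations, ALL D briefly prospers at ALL C's expense, then withers as its prey shrinks, and TIT FOR TAT ends with the largest share.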

Something even more interesting showed up in this ecological simulation. Some of the strategies had been designed to be "clever." They attempted to exploit other strategies by setting them up for an unprovoked defection. These strategies turned out to be too clever for their own good. They couldn't exploit T4T, since it retaliated immediately, then forgave. As the "prey" of these "predator" strategies died out, the predators died out too.

Next Axelrod looked at the question of what happens to a society once it becomes "all T4T." Is it stable, or can another strategy successfully invade it, doing better than it does and "out-reproducing" it? The answer turns out to be that T4T is stable against invasion. A more successful strategy, in order to get a higher cumulative score than T4T, must defect from time to time. When it does defect without being provoked, it will then get the maximum payoff ("going free," in the original Prisoner's Dilemma fable), instead of the cooperative payoff, on that turn. But on the next turn, T4T will retaliate and the invader will get (at best) the payoff for mutual defection. So long as there is enough concern for the future, then, T4T is stable against invasion. No alternative strategy can exploit the situation by being "clever" about its defections.
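
The arithmetic behind this stability can be made explicit. Axelrod models "concern for the future" as a discount parameter w, the weight of each move relative to the one before, so that a steady payoff of R per move is worth R/(1 - w) in total. The check below uses his tournament payoffs and an assumed w of 0.9; it compares a T4T resident against the two natural invaders, one that always defects and one that alternates defection with cooperation:

```python
# Discounted totals: a payoff x received on every move is worth x / (1 - w),
# where w is the weight of each move relative to the previous one.
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff
w = 0.9                   # assumed concern for the future

resident      = R / (1 - w)               # T4T vs T4T: cooperate forever
all_d_invader = T + w * P / (1 - w)       # defect once, then mutual defection
alternator    = (T + w * S) / (1 - w**2)  # defect, cooperate, defect, ...

print(f"T4T resident:        {resident:.1f}")
print(f"ALL D invader:       {all_d_invader:.1f}")
print(f"Alternating invader: {alternator:.1f}")

# Axelrod's condition: T4T cannot be invaded whenever
# w >= max((T - R) / (R - S), (T - R) / (T - P)); with these payoffs, 2/3.
threshold = max((T - R) / (R - S), (T - R) / (T - P))
print(f"Stability threshold on w: {threshold:.3f}")
```

At w = 0.9 the resident's 30 points beat both invaders, which is just the claim in the text: the one-shot gain from defecting is swamped by the discounted stream of mutual cooperation.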

This need for concern about the future explains why nomads would rather raid than trade. As they move about from one place to another, they have little incentive to try to cooperate with the people in the settled communities they encounter. Even if they did cooperate, it would be a long time before they came back, and they might not deal with the same people again. Likewise, the settled communities have little incentive to cooperate with the nomads, who won't be back for a long time. Thus if for any reason two parties have little concern about their next interaction with each other, both have an incentive to defect rather than cooperate. T4T is stable only when the future is important enough to outweigh a one-shot gain.

Unfortunately, ALL D is also stable against invasion. That is, if everyone in a community is following an ALL D strategy (in effect, Hobbes's war of "every man against every man"), no other strategy can displace it by "doing better." Suppose a newcomer to such a community tries cooperating. On that turn, he will be victimized, and his partner will do even better than he would have done with mutual cooperation. If the newcomer continues to cooperate, he will continue to be victimized. His poor score means he won't "reproduce" himself as well as his exploiters do. If he retaliates, he in effect adopts ALL D himself.

If people never meet again, or if they don't put much value on the future, ALL D is the only logical strategy. Since ALL D is just as stable against invasion as T4T, how can cooperation ever get started in a primitive society, where people don't put much value on their future dealings with each other? How can the spontaneous order that Hayek says should arise, and which Service says actually did arise, come about?

Axelrod provides an answer to this question, too. The "cooperators" must exist in clusters. That is, an ALL D society cannot be invaded by individuals who practice T4T, but it can be invaded successfully by a cluster of individuals who practice T4T. If the people in the cluster have most of their dealings with each other, rather than with the rest of the population, they will do well enough through cooperation to survive being victimized in their few interactions outside the cluster. Moreover, T4T can thrive and grow in an ALL D population if the number practicing it is large enough, even though they have most of their dealings with ALL D players.

With the payoffs Axelrod used in his particular tournament, he found that 5 percent of the population practicing T4T was sufficient to invade and eventually take over an ALL D population (this number would be different for a different set of numerical payoffs). Even though the cooperators get victimized in most of their dealings, they will do better over the long run than will those who never cooperate but who are always victimizers and almost always victims. Thus if cooperation gets started in a cluster, eventually the cluster becomes large enough that its members no longer need to "stick together." Once that happens, the "nice" strategy can successfully invade even a Hobbesian society.
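
The 5 percent figure follows from the same discounted arithmetic. Suppose a fraction p of a newcomer's interactions are with fellow cluster members and the rest are with ALL D natives, while the natives almost always meet one another. A sketch, again assuming the tournament payoffs and a w of 0.9:

```python
# Discounted score of each kind of pairing, with T,R,P,S = 5,3,1,0 and w = 0.9.
T, R, P, S = 5, 3, 1, 0
w = 0.9

t4t_vs_t4t   = R / (1 - w)            # 30: mutual cooperation forever
t4t_vs_alld  = S + w * P / (1 - w)    # 9: victimized once, then mutual defection
alld_vs_alld = P / (1 - w)            # 10: what the natives get from each other

# A cluster member whose fraction p of interactions stay inside the cluster
# beats a typical native once p*30 + (1-p)*9 > 10, i.e. p > 1/21 (about 5%).
for p in (0.03, 1 / 21, 0.10):
    cluster = p * t4t_vs_t4t + (1 - p) * t4t_vs_alld
    print(f"p = {p:.3f}: cluster member scores {cluster:.2f}, "
          f"native scores {alld_vs_alld:.2f}")
```

Even a cluster member who is victimized in 95 percent of its dealings outscores the natives, because its few cooperative relationships are worth so much more than a steady diet of mutual defection.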

This result about clustering shows why family and clan are so important in primitive societies. They are natural clusters, within which people practice reciprocity. Even though clans may be at war with one another, within the clan an individual is assured of cooperation, and threatened with eviction into the Hobbesian outer world if he betrays the clan.

Axelrod concludes that he has done what he set out to do. He has demonstrated that "mutual cooperation can emerge in a world of egoists without central control by starting with a cluster of individuals who rely on reciprocity." This is a very strong conclusion. It means that the growth of spontaneous order does not depend upon foresight, upon rationality, upon the ability to communicate, or upon altruism. The individuals involved need not even trust each other. What is needed is enough concern for the future that the payoff from a long series of mutually cooperative interactions outweighs a one-shot gain in the present. Once that exists, the potential for spontaneous order arises. After that, even a small cluster of cooperators can prosper because they have declared a truce, at least among themselves, in Hobbes's war of all against all.

For anyone who values individualism and limited government, this is a very important and fundamental result. It shows, not that Hobbes was universally wrong, but that he was not universally correct. It shows that Hayek's spontaneous order is not something accidental or requiring self-sacrifice, but instead something that can arise under a very plausible and likely set of conditions.

Axelrod's interesting work also shows the kinds of rules that are needed to maintain a libertarian society. A spontaneous order will grow and prosper in a society in which people can expect to interact with one another throughout an indefinite future, in which they value the cumulative total of those future interactions more than they value a one-shot gain in the present, and in which deviations from cooperation are punished promptly, then forgiven.

Joseph Martino is a technology forecaster at the Research Institute of the University of Dayton.