Politics

Complex Questions

The new science of spontaneous order



About a year ago Stephanie Forrest of the University of New Mexico and Terry Jones of the Santa Fe Institute–the nerve center of "complexity" research–were fooling around with ECHO, an artificial evolution program conceived by John Holland. Like most research in the emerging field of complexity, ECHO involves creating "digital organisms," small snippets of computer code equipped with instructions to help them survive and replicate in the silicon environment. When turned loose in cyberspace, these computer creatures act very much like living organisms. They can be programmed to compete for scarce resources, trade genes through sexual interactions, fight or trade with other organisms, and–in a fast-forward of evolution–reproduce themselves over thousands of generations.
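
To get a feel for what such a program involves, consider a minimal sketch in Python. It is a toy in the spirit of ECHO, not the program itself, and every parameter in it is invented for illustration; but it shows how selection and mutation alone push a population of "digital organisms" toward fitter genomes:

```python
import random

# A toy in the spirit of ECHO, not the program itself: every "organism"
# is a bit string, and the count of 1-bits stands in for its skill at
# gathering resources. Fitter organisms replicate more; copies mutate.
# (All parameters are invented for illustration.)
random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 16, 100, 200, 0.01

def fitness(genome):
    return sum(genome)

def mutate(genome):
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Fitness-proportional selection: the digital analogue of competing
    # for scarce resources and out-reproducing one's neighbors.
    weights = [fitness(g) + 1 for g in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [mutate(p) for p in parents]

print("mean 1-bits per genome after evolution:",
      sum(fitness(g) for g in population) / POP_SIZE)
```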

The particular program that Jones and Forrest were tuning was designed to mimic biological evolution. After a while, they began noticing an odd pattern. "No matter what organisms we started with, a few species would become predominant while the vast majority remained scarce," says Jones. "The species might change places occasionally, but the pattern always remained the same. Even programs that had bugs in them produced the same configuration."

Their curiosity piqued, Jones and Forrest showed their results to Jim Brown, an ecologist at the University of New Mexico. His eyes opened wide. "That's Preston's curve," said the startled Brown. "It's a well-known ratio in ecology which says that most species remain small while only a few species widely proliferate."
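
Preston's curve describes species abundances as roughly lognormal: sorted into doubling classes ("octaves") of population size, most species crowd into the low octaves while a handful dominate. A short Python sketch, with purely illustrative numbers, shows how multiplicative chance alone reproduces the shape:

```python
import math
import random

# A hedged sketch of how Preston's curve arises: give every species the
# same starting population and let each grow or shrink by a random
# factor each generation. Multiplicative chance alone yields a roughly
# lognormal spread: most species end up rare, a few enormous.
# (All numbers here are illustrative.)
random.seed(1)
SPECIES, GENERATIONS = 1000, 50

abundances = [1.0] * SPECIES
for _ in range(GENERATIONS):
    abundances = [a * random.uniform(0.5, 2.0) for a in abundances]

# Bin species into Preston's "octaves": doubling classes of abundance.
octaves = {}
for a in abundances:
    octave = int(math.log2(a))
    octaves[octave] = octaves.get(octave, 0) + 1
for octave in sorted(octaves):
    print(f"octave {octave:4d}: {'#' * max(1, octaves[octave] // 10)}")
```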

The replication of Preston's curve in the computer is typical of discoveries being made today in the field of complexity. At times, the insights seem sublime, as if the divine architecture of the universe were being revealed. At other times, the research is disappointing: "The major discovery to emerge from the [Santa Fe] Institute thus far is that 'it's very hard to do science on complex systems,'" writes John Horgan of Scientific American, quoting University of Chicago mathematical biologist Jack D. Cowan. Neither the promise nor the disappointment is surprising, however. Complexity is a young field, examining difficult problems with new tools. Its results are often remarkable, deepening our understanding of how the world works (or might work). But they are also quite preliminary and subject to conflicting interpretations, especially when applied to social systems.

Complexity theory explores how systems that are open to sources of energy are able to raise themselves to higher levels of self-organization. Oil droplets in water will arrange themselves into a sphere that is hollow inside. Snowflakes form crystals that always have a hexagonal design. Previously these patterns had been thought of as coincidences or curiosities. But beginning with Ilya Prigogine, the 1977 Nobel laureate in chemistry, theorists have been arguing that complex systems are capable of making unexpectedly large leaps in self-organization that allow them to maintain a precarious non-equilibrium.

This ordered non-equilibrium forms a temporary stay against the dreaded Second Law of Thermodynamics, which says that any order in a closed system will eventually dissipate–a phenomenon called "increasing entropy." Water and ink mixed together, for example, eventually turn a uniform light blue. Pockets of hot and cold air in a room will become uniformly lukewarm. At bottom, the logic is that there are only a few orderly states and many, many more messy or disordered ones. The chances of achieving an orderly state by accident are vanishingly small.
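
The arithmetic behind that logic is easy to check. A few lines of Python (an illustrative count, not a physics simulation) compare the lone "ordered" arrangement of particles in a box against the astronomical number of disordered ones:

```python
from math import comb

# "Few ordered states, many disordered ones," made concrete: put N
# particles in a box with two halves. The single arrangement with all
# particles on the left is "ordered"; a 50/50 split is "disordered".
N = 100
total = 2 ** N                   # every particle independently left or right
all_left = 1                     # exactly one fully ordered arrangement
half_split = comb(N, N // 2)     # arrangements with an even split

print(f"P(all {N} particles on the left) = {all_left / total:.3e}")
print(f"P(exactly 50/50 split)           = {half_split / total:.3e}")
```

Run it and the ordered state comes up with probability on the order of 10 to the minus 31st power, while the even split alone accounts for about 8 percent of all arrangements.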

Popular alarmists from Henry Adams to Jeremy Rifkin have preached pessimism by misusing the Second Law to argue that life on Earth is quickly winding down. (Unaware of nuclear energy, Adams predicted that the sun's core would burn out in a few thousand years.) But Earth is not a closed thermodynamic system–and the universe may not be either. Life on Earth lives off the constant energy provided by the sun. This allows the biosphere to pile up order in the form of complex organic molecules, stored genetic information, and stable ecosystems. As for the universe itself, it is still living off the energy of the Big Bang. Just where that energy came from and whether it will eventually dissipate to thermodynamic equilibrium is still being debated by the cosmologists. But concerns about the universe winding down into a uniform state of low-grade energy are premature at best.

Says Harold Morowitz, editor-in-chief of Complexity, a bi-monthly journal that published its first issue in September: "Entropy is only defined for closed, equilibrium systems. The Earth is not a closed system and for all we know, the universe may not be, either. But as far as non-equilibrium systems are concerned, any system that is in something other than its most random state, as a result of boundary conditions or energy flowing through the system, is experiencing some kind of spontaneous order."

Enthusiasts point to examples of spontaneous order almost everywhere–the gathering of winds into a hurricane, the self-replication abilities of DNA, the interlocking relationships of an ecosystem, the immune system, the human mind, the development of language and common law, the evolution of human cultures. All are representative of the slow, cumulative development of "autocatalytic" patterns and positive feedback loops that eventually emerge as "complex, adaptive systems"–organisms capable of sustaining their identity while constantly changing to meet new environments.

As Stuart Kauffman, a MacArthur fellow and Santa Fe Institute star, puts it: "I read about entropy and I expect a world filled with disorder. Yet I look outside my window and see nothing but order. It is the order that needs explaining."

In his first brilliant discovery 20 years ago, for example, Kauffman created a system of 100 computer-dwelling "genes," all turning each other on and off, just as real genes are believed to do in any living organism. Although the situation appeared completely chaotic, Kauffman discovered that when each gene was controlled by only two other genes, the on-off patterns would settle down into about 10 repeating cycles, or "basins of attraction." He posited that these basins corresponded to the different cell types that are produced by the same set of genes in every multicellular organism. In one of those sublime revelations that dot the history of complexity research, Kauffman later discovered that he had duplicated a long-known principle of genetics: that the number of cell types in an organism is roughly equal to the square root of the number of its genes.

"The simplicity of this result astonished me at the time," he says. "It still astonishes me."

While everyone in complexity research agrees that spontaneous order exists throughout the cosmos, no one seems to be able to agree on exactly what that implies. "The consensus right now is that there is no one thing called complexity theory," says Michael Simmons, director of research at Santa Fe. "Talk to any person who is working in the field and they'll give you a different definition."

One area of agreement is the concept of "algorithmic information content," or AIC. This refers to the length of the instructions needed to generate the solution to a problem. AIC can be very small or very large. The instruction "Print 1 followed by a trillion zeros" produces a very large outcome from a single simple instruction. By contrast, the instruction "Print 178983698068954348230…," continued out to a trillion randomly selected digits, would require an AIC essentially as long as the answer itself.
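
AIC itself cannot be computed exactly, but off-the-shelf compression gives a rough, workable proxy: an ordered string shrinks to almost nothing, while a random one barely shrinks at all. A Python sketch, scaled down from a trillion digits to a million for practicality:

```python
import random
import zlib

# AIC itself is uncomputable, but compressed size is a rough, computable
# proxy: ordered strings shrink to a short recipe, random ones barely
# shrink at all. (Scaled down to a million characters for practicality.)
random.seed(0)
ordered = "1" + "0" * 1_000_000
random_digits = "".join(random.choice("0123456789") for _ in range(1_000_000))

for name, s in [("ordered", ordered), ("random", random_digits)]:
    packed = len(zlib.compress(s.encode(), 9))
    print(f"{name:8s}: {len(s):>9,} chars -> {packed:>7,} bytes compressed")
```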

It is only in the middle range that things start to get interesting. Suppose we tell the computer: "Generate a million random digits. Scan the string for every occurrence of the sequence 1234567890. Remove those occurrences and scramble the remaining digits. Repeat the scan, removal, and scramble until the sequence no longer appears. Count the digits that remain and print the answer." The computer will have to do a lot more "thinking" to complete these instructions.
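
The procedure is easy to state in code. Here is a direct Python rendering, with one liberty: a six-digit target pattern instead of ten, so that a million random digits actually contain a few occurrences to find and remove:

```python
import random

# The article's middle-range task, rendered directly. One liberty: a
# six-digit target instead of the ten-digit one, so that a million
# random digits actually contain a few occurrences to find and remove.
random.seed(0)
PATTERN, LENGTH = "123456", 1_000_000

digits = "".join(random.choice("0123456789") for _ in range(LENGTH))
passes = 0
while PATTERN in digits:
    digits = digits.replace(PATTERN, "")                   # scan and remove
    digits = "".join(random.sample(digits, len(digits)))   # rescramble
    passes += 1

print(f"{len(digits):,} digits remain after {passes} scan-remove-scramble passes")
```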

Charles Bennett of IBM has defined this amount of thinking a computing machine has to do to generate a printout as "depth." Neither a perfectly ordered system ("Print 1 followed by a trillion zeros") nor a perfectly disordered one ("Print 1823749567819074823719…") has much depth to it. Instead, depth–which is essentially a measure of complexity–occurs in the middle range. This middle area has been defined as a "phase transition" between order and chaos.

Systems become "deeper" and more complex as elements of time and memory are added. A perfectly elastic ball bouncing on a table has no memory and can be described almost perfectly by the linear equations of physics. A hurricane gathering force, on the other hand, is nonlinear and virtually impossible to describe in mathematical terms. Rather than complex, it is "chaotic."

Although chaotic systems are inherently unpredictable, they can be described by certain laws, the most important being the "butterfly effect," which says that small changes in initial conditions can produce enormous changes in outcome. (The term comes from the piquant observation that, since weather patterns are chaotic, a butterfly flapping its wings in Mexico today can change the weather in Chicago next month.)
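
The effect can be demonstrated in a few lines. The Python sketch below uses the logistic map, a standard toy model of chaos (not weather, but the same mathematics of sensitive dependence): two trajectories that begin a billionth apart soon bear no resemblance to each other.

```python
# The "butterfly effect" in miniature: the logistic map x -> r*x*(1-x)
# in its chaotic regime (r = 4). Two trajectories starting one part in
# a billion apart diverge completely within a few dozen steps.
r = 4.0
x, y = 0.400000000, 0.400000001   # initial conditions differ by 1e-9

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.2e}")
```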

Chemical reactions are complex without necessarily being chaotic. Take a container filled with hydrogen and oxygen atoms in a 2-to-1 ratio and certain predictions can be made. Still, it will make an enormous difference whether the atoms are already bonded to form water molecules or whether they are ionized and volatile. The system has memory and therefore requires more information to describe it.

Biology is more complex still. Biological systems all obey the laws of physics and chemistry, yet have qualities that cannot be derived from physics and chemistry alone. To understand Preston's curve, for example, you have to know something about the actions of biological organisms within evolutionary time.

In his book The Quark and the Jaguar, physicist Murray Gell-Mann, who won the Nobel Prize for his quark theory, posits that complexity explains why all the action of the universe cannot simply be reduced to a few fundamental rules. Or, to put it differently, complexity theory says that, given a complete knowledge of the state of the universe at its inception and given a complete knowledge of the laws of physics and chemistry, it would still be impossible to predict the path of life on Earth.

"I know of no serious scientist who believes that there are special chemical forces that do not arise from underlying physical forces," writes Gell-Mann. "In that sense, we are all reductionists. But the very fact that chemistry is more special than elementary particle physics, applying only under the particular conditions that allow chemical phenomena to occur, means that information about those special conditions must be fed into the equations."

The differential applies even more strongly to biology. "To begin with, many features common to all life on Earth may be the result of accidents that occurred early in the history of life on the planet but could have turned out differently," writes Gell-Mann. In that case, terrestrial life has taken on a good deal of effective complexity, since a great deal of new information would have to be plugged into the equations to describe its outcome.

Even if terrestrial biology is not unique, it still contains a good deal of depth. "The science of biology is very much more complex than fundamental physics because so many of the regularities of terrestrial biology arise from chance events as well as fundamental laws," writes Gell-Mann. Economics, psychology, and human cultures have even greater depth and complexity than biology, simply because they are based on human beings' biological nature but also go beyond it.

Gell-Mann says he formulated this hierarchy partly in response to the reductionism he found among his colleagues at Caltech. Many specialists argued that the human mind had no unique qualities that could not be described by the workings of electrical impulses and organic chemistry. "Some refused to talk about the mind at all," he recalls. "One friend of mine called it 'the M-word.'"

Now the anti-reductionists are on the counterattack. Says Kauffman: "The past three centuries of science have been predominantly reductionist, attempting to break complex systems into simple parts, and those parts, in turn, into simpler parts. This program has been spectacularly successful, but it has also left a vacuum. How do we use the information gleaned about the parts to build a theory of the whole? The complex whole, in a completely non-mystical way, can often exhibit collective properties–'emergent' features–that are lawful and unique in their own right. These properties constitute the 'self-organization' or 'spontaneous order' of the system."

Kauffman posits that the spontaneous order being discovered in natural systems makes the genesis of life seem natural, even expected. Rather than the blind, one-shot miracle implied by Darwin's theory of natural selection, life on Earth may be the orderly and expected outcome of nature's inherent tendencies toward self-organization. "Natural selection is important, but it has not labored alone to create the fine architectures of the biosphere," says Kauffman. "I believe the order of the biological world is not merely tinkered together, but arises naturally and spontaneously because of laws of complexity that we are just beginning to understand."

In the recently published At Home in the Universe, a popular account of his ideas, Kauffman is even more enthusiastic: "Profound order is being discovered in large, complex, and apparently random systems. I believe that this emergent order underlies not only the origin of life itself, but much of the order seen in organisms today…. If all this is true, what a revision of the Darwinian worldview will lie before us! Not we the accidental, but we the expected!"

Nor does Kauffman shrink from the obvious implications of this discovery:

"Most important of all, if this is true, life is vastly more probable than we have supposed. Not only are we at home in the universe, but we are far more likely to share it with as yet unknown companions."

Carl Sagan, take heart.

Complexity research has quickly spread from the natural to the social sciences. "We are more convinced than ever that our greatest impact is going to be in the social sciences," says Santa Fe research director Simmons. "In the evolution of human culture and the adaptive learning that goes on within large organizations we expect to see significant discoveries very soon."

The most striking result so far: As computers have encountered complex systems, it has become clear that it is difficult, if not impossible, to plan their outcomes centrally. John Holland, the pioneering University of Michigan mathematician, built the first computerized "complex adaptive system" by mimicking the free market. Looking for a way to model the human brain, Holland found that efforts to create a "command-and-control" model had already failed at MIT. So he solved the problem by making each digital "synapse" into an economic unit. Individual synapses were paid off for solving problems, with a "bucket brigade" to distribute the rewards to participating units.
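
Stripped to its bones, the bucket brigade looks like the Python sketch below, an illustrative reduction of Holland's scheme rather than his classifier-system code: rules fire in a chain, each pays part of its "strength" to the rule that acted before it, and only the last rule collects the external reward. Over repeated episodes the payoff seeps backward, so even the earliest rules profit from their contribution to the final result.

```python
# A toy bucket brigade (an illustrative reduction, not Holland's code):
# a chain of 5 rules fires in order; each rule pays a bid upstream to
# the rule that set the stage for it, and only the last rule in the
# chain earns the external reward. Parameters are invented.
CHAIN, EPISODES, BID_FRACTION, REWARD = 5, 200, 0.1, 10.0
strength = [1.0] * CHAIN

for _ in range(EPISODES):
    for i in range(CHAIN):
        bid = BID_FRACTION * strength[i]
        strength[i] -= bid          # each rule pays a bid to fire...
        if i > 0:
            strength[i - 1] += bid  # ...which goes to its upstream supplier
    strength[-1] += REWARD          # external payoff reaches only the end

print("rule strengths, first to last:", [round(s, 1) for s in strength])
```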

"Economic reinforcement via the profit motive was an enormously powerful organizing force, in much the same way that Adam Smith's Invisible Hand was enormously powerful in the real economy," writes Mitchell Waldrop in Complexity, a 1992 book on the work at Santa Fe. "When you thought about it, in fact, the Darwinian metaphor and the Adam Smith metaphor fit together quite nicely."

Computer experts at the Palo Alto Research Center discovered the same principle in 1988 when Xerox tried setting up a system to maximize use of its computers. Engineers had tried a "command-and-control" system, but found it unworkable. The information needed to coordinate decisions quickly overwhelmed the central processor. So Xerox researchers invented SPAWN, a system in which individual computers are given "money" and instructed to maximize their bank accounts by taking on tasks and trading computer downtime among themselves. Without any external direction or control, the computers quickly optimize their own use by trading on this internally created market.
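
A toy version of the idea fits in a dozen lines of Python (illustrative, not Xerox's code): each machine posts a price for its time, every task simply buys from the cheapest seller, and prices respond to demand. No scheduler ever sees the whole picture, yet the load spreads out:

```python
# A toy in the spirit of SPAWN, not Xerox's system: machines post prices
# for compute time, tasks buy from the cheapest seller, and demand moves
# prices. (Machine counts and price rules are invented for illustration.)
MACHINES, TASKS = 4, 40
price = [1.0] * MACHINES
load = [0] * MACHINES

for _ in range(TASKS):
    cheapest = min(range(MACHINES), key=lambda m: price[m])
    load[cheapest] += 1          # the buyer takes the best offer...
    price[cheapest] *= 1.2       # ...and demand pushes that price up,
    for m in range(MACHINES):    # while idle machines undercut slightly
        if m != cheapest:
            price[m] *= 0.95

print("final loads per machine:", load)
```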

"All complex biological and economic systems work this way," says Tad Hogue, a member of the research staff at Xerox PARC. "If every human cell's protein production had to be processed through the brain, the costs of coordination would quickly overwhelm the nerve cells' capacities. Consequently, most decisions are made within the cell, or by the internal communications of the endocrinal system, which bypasses most brain functions. Although we tend to think of ourselves being in complete command of our bodies, most of life's choices are made within the individual cells."

Stuart Kauffman advances the "patches" principle for solving problems in large organizations. Trying to find the best way for a large entity to solve a complex mathematical problem, Kauffman shows that if the entity is broken up into a grid of small "patches" and each patch is allowed to solve its own small portion of the problem, a near-optimal solution emerges. Kauffman even invokes Adam Smith by way of explanation: "Here we have another invisible hand in operation. When the system is broken into well-chosen patches, each adapts for its own selfish benefit, yet the joint effect is to achieve very good [solutions] for the whole lattice of patches. No central administrator coordinates behavior. Properly chosen patches, each acting selfishly, achieve the coordination." He suggests the finding has enormous implications for federalism and for decentralizing large corporate units.
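
A rough Python sketch conveys the flavor (an illustrative stand-in for Kauffman's lattice experiments, not his model): spins on a grid "want" to disagree with their neighbors, a problem riddled with conflicting constraints. Carve the grid into patches, let each patch flip only its own spins for its own benefit, and the energy of the whole lattice falls anyway:

```python
import random

# An illustrative stand-in for Kauffman's patch experiments, with
# invented parameters: spins on a 12x12 grid each want to disagree with
# their four neighbors. The grid is carved into 4x4 patches; each patch
# greedily flips its own spins for its own local benefit.
random.seed(0)
SIZE, PATCH, SWEEPS = 12, 4, 20
grid = [[random.choice([-1, 1]) for _ in range(SIZE)] for _ in range(SIZE)]

def local_energy(r, c):
    # Agreement with the four neighbors; positive means "unhappy" here.
    return grid[r][c] * sum(grid[(r + dr) % SIZE][(c + dc) % SIZE]
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def total_energy():
    return sum(local_energy(r, c)
               for r in range(SIZE) for c in range(SIZE)) // 2

print("energy before:", total_energy())
for _ in range(SWEEPS):
    for pr in range(0, SIZE, PATCH):            # visit each patch in turn
        for pc in range(0, SIZE, PATCH):
            for r in range(pr, pr + PATCH):     # the patch adapts selfishly,
                for c in range(pc, pc + PATCH):  # ignoring all the others
                    if local_energy(r, c) > 0:
                        grid[r][c] *= -1
print("energy after: ", total_energy())
```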

On the surface, the computer-assisted discovery of spontaneous order would appear to be a triumphant vindication of libertarian social theory in general and the Austrian School of economics in particular. The term "spontaneous order," after all, was coined by Friedrich A. Hayek in his 1960 classic, The Constitution of Liberty. For most of this century, free market economists have labored to convince people that "economic planning" isn't necessary. Left to itself, they argued, the market economy will spontaneously optimize the wishes and desires of its participants–even though all are only pursuing their own self-interest.

"I am convinced that if [the market system] were the result of human design, it would be acclaimed as one of the greatest triumphs of the human mind," wrote Hayek in his landmark essay, "The Uses of Knowledge in Society" (1945). "Its misfortune is the double one that it is not the product of human design and that the people guided by it usually do not know why they are made to do what they do."

Yet for all their efforts in defending the spontaneous order of the market, neither the Austrians nor their followers ever articulated in convincing mathematical terms just how spontaneous order arises. Astonishingly, the complexity theorists, with their digital organisms and time-compressing computer models, now seem to have provided the answer.

Brian Arthur, head of economics research at Santa Fe, readily acknowledges this precedence. "Right after we published our first findings, we started getting letters from all over the country saying, 'You know, all you guys have done is rediscover Austrian economics,'" says Arthur, sitting in his book-lined offices at the Santa Fe Institute's sun-drenched hilltop mansion. "I admit I wasn't familiar with Hayek and von Mises at the time. But now that I've read them, I can see that this is essentially true."

Yet all this has not prevented Arthur and the other economists at Santa Fe from turning complexity into a rationale for government intervention. Arthur's basic concepts are "increasing returns" and "path dependence." In a series of papers (particularly "Increasing Returns and Path Dependence in the Economy," published in 1993), he has argued that–rather than the old Ricardo/Malthusian pessimism about "diminishing returns"–technological innovation in the economy can produce cascading improvements that raise productivity beyond anyone's wildest dreams. Improvements lead to further improvements, which interlock in an ever more interdependent network–railroads, for example, leading to better transportation of food, leading in turn to gains in agricultural productivity.

But Arthur also argues that, once undertaken, paths of innovation may lead to "technological lock-in": Economies and societies may get frozen on technological paths that later become unproductive. He likes to cite the QWERTY system on the typewriter keyboard and the triumph of VHS over Betamax as instances where a possibly inferior technology has become enshrined by the market.
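
Arthur often makes the point with urn-style adoption models. The Python sketch below is a hedged reconstruction of that flavor (invented numbers, not Arthur's published model): each new buyer favors the technology with the larger installed base, and under increasing returns the leader's pull grows faster than its share, so the market tips one way or the other on the strength of early accidents.

```python
import random

# Illustrative urn-style lock-in: buyers choose between technologies A
# and B with probability weighted by the SQUARE of each installed base,
# a crude stand-in for increasing returns. Early chance decides which
# standard takes the whole market. (All parameters are invented.)
random.seed(42)
lockins = {"A": 0, "B": 0}

for _ in range(1000):                     # 1000 independent market histories
    adopters = {"A": 1, "B": 1}
    for _ in range(500):                  # 500 buyers arrive in sequence
        wa, wb = adopters["A"] ** 2, adopters["B"] ** 2
        choice = "A" if random.random() < wa / (wa + wb) else "B"
        adopters[choice] += 1
    winner = "A" if adopters["A"] > adopters["B"] else "B"
    lockins[winner] += 1

print("markets that locked in to each standard:", lockins)
```

Across many simulated histories roughly half the markets standardize on A and half on B; within any single history, the split is lopsided, and nothing about the winner guarantees it was the better technology.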

"Economies are complex systems and once they adapt it becomes difficult to change direction," says Arthur. "Nobody makes Betamax movies because people don't have the home equipment to show them and people don't buy the home equipment because nobody makes Betamax movies." To overcome this inertia, he argues, government intervention may be necessary. (The specific examples of QWERTY and Betamax have been examined in detail, and convincingly debunked, by economists Stan Liebowitz of the University of Texas at Dallas and Stephen Margolis of North Carolina State University.)

Arthur's work furnishes the basic principles for a "white paper" submitted to federal Judge Stanley Sporkin in November 1994, arguing that allowing Microsoft to buy Intuit, which makes personal-finance software, would lead to consequences dire enough "to constitute a threat to the underpinnings of a free society." In that context, "increasing returns"–rather than a reason for technological optimism–becomes a two-by-four with which to hit any company that may itself experience increasing returns from a new technology, and thus develop a monopoly.

So if complexity research breeds so much respect for self-organization–putting digital flesh on the skeleton of Adam Smith's invisible hand–why do its economists so often end up defending the visible hand of the government? Part of the answer seems to lie in the "path dependence" of the Santa Fe program itself. The researchers bring their own political philosophies and assumptions to their work, shaping not so much the research itself as its interpretation. The founder of the economics program was Kenneth Arrow, a Nobel Prize winner and dedicated Keynesian. Gell-Mann is a devoted environmentalist and co-founder of the World Resources Institute–as co-chairman of Santa Fe's science board, he has tried to relate the institute's work to rainforest preservation and other environmentalist concerns. "Shoot Newt" reads graffiti on an institute blackboard.

At the heart of complexity theory, however, lies the notion of freely evolving systems, including social and economic systems. Kauffman's work is pregnant with support for individual liberty and free institutions. John Holland's decision to model the human brain in the free market may be seen as a paradigm for the era. The problem seems to be that complexity theory, as applied to economics, has been developed by researchers who already had a strong predilection for activist government. And the theory isn't well developed enough to say much, if anything, about public policy. Complexity research is still far more descriptive than prescriptive. Its main contribution to our understanding of markets is the evidence that order can arise without central direction. Complexity research addresses one of the hardest obstacles faced by advocates of free markets: what Thomas Sowell has called "the intentional fallacy," the notion that someone must be in charge.

Says Michael Rothschild, the former management consultant who wrote Bionomics to popularize his own ideas about "economy as ecology": "The great anxiety people have about the free market is that nobody appears to be running it. There's a psychological certainty in the idea that somebody–the president, the chairman of the Federal Reserve Board–is 'at the helm.' Perhaps the hardest thing for people to grasp is that the economy has the properties of a living organism–it runs by itself."

Complexity theory's elegant demonstrations should help make these often counterintuitive concepts more acceptable to the general public. Yet none of this constitutes a moral proof for choosing individual liberty. In the end, people have to make that choice themselves.

William Tucker is co-writer, with Newt Gingrich, of To Renew America (HarperCollins). This is the final article in a multi-author series on science and society.