Hidden Order: How Adaptation Builds Complexity, by John Holland, New York: Addison-Wesley, 185 pages, $24.00/$12.00 paper
"On an ordinary day in New York City, Eleanor Petersson goes to her favorite specialty store to pick up a jar of pickled herring. She fully expects the herring to be there. Indeed, New Yorkers of all kinds consume vast stocks of foods of all kinds, with hardly a worry about continued supply….What enables cities to retain their coherence despite continual disruptions and a lack of central planning?"
With this question, John Holland begins his ambitious and intriguing, but often frustrating, book about spontaneously ordering systems. Holland is a professor of computer science and electrical engineering and of psychology at the University of Michigan, a MacArthur Fellow, and co-chairman of the Santa Fe Institute Steering Committee. (See "Complex Questions," January 1996.) In Hidden Order, he draws our attention to similarities among phenomena as diverse as the biological evolution of individual organisms and complex ecosystems; the functioning of the immune system; the way minds perceive and learn; the dynamics of the market economy; and some software systems of his invention, which learn to adapt to their environment. Though separate disciplines study these phenomena, Holland shows that deep general principles underlie them all. When fields separated by existing academic boundaries share unifying principles, the time is ripe for the creation of a new cross-cutting discipline.
Holland calls that discipline-in-formation the study of "complex adaptive systems," or cas. Like economists F.A. Hayek and Herbert Simon, he was one of the field's earliest contributors–and one of the best. Although Holland developed his ideas without knowledge of Hayek's work, the two scholars are wonderfully complementary. (Simon, whose work is beyond the scope of this review, is a leading artificial intelligence researcher and, like Hayek, a Nobel laureate in economics.) Whereas Holland started with machine learning and genetic evolution, and extended into psychology, epistemology, and symbiosis in ecosystems, Hayek started with psychology, evolution, and economics, and extended into epistemology, law, ethics, and culture.
By taking seriously the evolutionary, unplanned nature of markets, Hayek made seminal contributions to economics. These contributions help explain the nature of the knowledge learned by evolutionary processes, and how ecosystems can successfully self-organize to employ vast amounts of such knowledge. Though not well known by current neural-network researchers, Hayek's 1952 book The Sensory Order helped found (via the work of Frank Rosenblatt) that branch of machine-learning research. Taking different paths, Hayek and Holland came to a common notion of the territory they were exploring. Hayek also called for a new discipline to study cas, which he called "spontaneous orders."
Of all cas researchers, including Hayek, Holland is clearest that evolutionary learning is the important property shared by complex adaptive systems, and he has done the most to advance our general understanding of such learning. Unfortunately, though Hidden Order focuses on learning, speaks of the importance of applying cas insights to economics, and even employs idealized markets within the learning mechanism of classifier systems, it nowhere applies learning ideas to the study of markets.
Nonetheless, and to Holland's credit as one of the new discipline's progenitors, the book will encourage interested readers to consider the implications of his theory for how markets learn. Current economics focuses on markets as mechanisms for efficient allocation and distribution, and as arrangements providing freedom and rights, but it rarely examines the questions first raised by Hayek: how markets themselves learn, and how they successfully employ learned knowledge. Building on Holland's work could open up this important field of inquiry.
Holland came to complex adaptive systems through his work on machine learning–the effort to build artificial systems that learn by interacting with an environment. Using insights from biological evolution, he first invented "genetic algorithms," presented in his 1975 book Adaptation in Natural and Artificial Systems. Genetic algorithms have spawned an important branch of machine-learning research, complete with annual conferences, and are used in commercial software to do such complex tasks as grading wood and identifying fingerprints. Combining genetic algorithms with insights from cognitive psychology, epistemology, and economics, Holland went on to invent "classifier systems," an even more ambitious machine-learning architecture presented in his 1986 book Induction. By borrowing mechanisms from naturally occurring complex adaptive systems, and synthesizing them into machine-based systems, Holland hoped not only to find useful tools for computing but to gain new insights into natural cas.
In Hidden Order, Holland shifts his focus from machine learning to the study of the general nature of complex adaptive systems. He starts by proposing seven characteristics, which he argues unify all cas. The book goes on to discuss Holland's three evolutionary software architectures: genetic algorithms, classifier systems, and a new one presented here for the first time, Echo. In Echo, complex creatures arise from symbiotic patterns formed from simpler creatures. These three systems are not introduced primarily to teach about machine learning, however, but to explore cas issues in a clear and concrete manner. Throughout, Holland sprinkles examples from a good mix of different cas, including markets. Finally, the book concludes with an attempted call-to-arms that presents potential contributions to economics as the primary motivation for studying cas and suggests some public policy implications.
One of Holland's principal purposes is to propose underlying characteristics common across all complex adaptive systems. He seeks to create a theoretical underpinning for the new field that will "separate fundamental characteristics from fascinating idiosyncrasies and incidental features," so that systematic research becomes possible. "Theory is crucial," he writes. "Serendipity may occasionally yield insight, but is unlikely to be a frequent visitor. Without theory, we make endless forays into uncharted badlands." With a workable theory, however, we can begin to ask and explore useful questions.
Holland begins, therefore, by organizing the properties and mechanisms that he argues are universal among cas. Very briefly, these are:
- Aggregation. Complexity emerges from interactions of simpler components, often themselves complex systems emergent from interactions of yet simpler components: Bodies are made of organs, made of tissues, made of cells.
- Tagging. Agents carry recognizable markers allowing other agents to suspect which ones have particular properties. Examples include trademarks, pheromones, and the immune system's ability to spot past invaders.
- Nonlinearity. Agents interact, rather than just adding together.
- Flows. Agents organize into networks of potential interactions. One interaction triggers another, causing effects to flow through these networks.
- Diversity. Agents evolve to fill diverse niches, which are defined by how the agent interacts with other agents. Niches usually outlive their current occupants, and the change of niches over time has a much greater effect on an ecosystem than changes to individual agents. For markets, if agents are businesses then niches are industries.
- Internal models. Agents experience internal changes that result from sensing an external world. Such changes bias actions toward those likely to be effective in a world that produced those sensations. These internal states are often a form of tacit knowledge of the world–they embody a discovery of how to exploit a regularity of the world without representing the regularity itself. Evolution adapted the eye to facts about optics, but nowhere in the eye can one find a representation or explanation of those facts.
- Building blocks. Components are reused for multiple purposes. This is the flip side of aggregation. When businesses are not vertically integrated, for instance, a supplier serves multiple companies. These independent suppliers learn less about any one company, instead learning more general lessons by serving a diversity of companies.
Though not quite as universal as Holland claims, these characteristics are universal enough to indicate general principles. Holland's three ecosystems provide clear examples of several of these.
Genetic algorithms search for solutions to hard problems by variation and selection of creatures representing proposed answers. Genetic algorithms are based on a simplification of genetic evolution: Each "creature" is essentially a single fixed-length chromosome–a string of computer symbols–whose "fitness" is rated according to the quality of answer it represents. Initially, one creates a population of creatures made of random chromosomes. The ratings then determine how many variations of each creature will form the next generation of the population. This procedure is repeated from one generation to the next.
For example, say a salesman must visit certain cities and needs a route that minimizes total distance. To use genetic algorithms to find a short route, the salesman generates a population where each creature is a randomly ordered list of these cities. To pose a problem, the salesman creates a reward function: When a creature's list is interpreted as a route, the shorter the route, the more offspring the creature has. Over generations, the quality of routes present in the population improves. The answers produced by genetic algorithms, while embedded in the rules governing the system, are often far from predictable. By mimicking genetic evolution, such systems are able to generate better solutions more quickly than conventional programs. They learn.
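The procedure just described can be sketched in a few lines of Python. The city coordinates, population size, swap mutation, and truncation selection below are illustrative assumptions for the sake of a runnable sketch, not Holland's actual parameters:

```python
import random

# Hypothetical cities on a unit square; each "creature" is a route,
# i.e., an ordering of the city labels.
CITIES = {i: (random.random(), random.random()) for i in range(10)}

def route_length(route):
    """Total distance of the closed tour visiting cities in this order."""
    total = 0.0
    for a, b in zip(route, route[1:] + route[:1]):
        (x1, y1), (x2, y2) = CITIES[a], CITIES[b]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def mutate(route):
    """Variation: swap two cities in a copy of the route."""
    r = route[:]
    i, j = random.sample(range(len(r)), 2)
    r[i], r[j] = r[j], r[i]
    return r

def evolve(generations=200, pop_size=50):
    """Selection: shorter routes 'reproduce'; longer ones die off."""
    population = [random.sample(list(CITIES), len(CITIES))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=route_length)          # rate each creature
        parents = population[:pop_size // 2]        # the fitter half survives
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return min(population, key=route_length)
```

Even this toy version exhibits the behavior the review describes: no line of code contains a route-planning strategy, yet good routes emerge from variation and selection alone.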
Sexual reproduction, in particular, leads to a form of learning that produces substantially faster search. In the sales-route example, when two creatures mate, their progeny inherit sub-sequences of cities from each parent, leading to the eventual combination of separately discovered good sub-routes. Combining parts of separately evolved answers produces rapid convergence on a good overall route. In effect, the sub-sequences are treated as proposed answers to possible sub-problems and serve as reusable building blocks for assembling ever larger answers. Learning proceeds by accumulation, improvement, and growth of these building blocks. Holland's analysis of sex also helps explain the power of combining partial solutions by entrepreneurship or interdisciplinary study.
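One common way to implement this inheritance of sub-sequences is the "order crossover" operator; the sketch below is a standard textbook version, not necessarily the exact operator Holland used:

```python
import random

def order_crossover(parent_a, parent_b):
    """Order crossover (OX): the child inherits a contiguous sub-route
    from one parent, then fills in the remaining cities in the order
    they appear in the other parent. Good building blocks discovered
    separately in each parent can thus combine in the offspring."""
    n = len(parent_a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = parent_a[i:j]                           # sub-route from A
    remaining = [c for c in parent_b if c not in child]  # B's ordering
    for k in range(n):
        if child[k] is None:
            child[k] = remaining.pop(0)
    return child
```

Whatever the slice points, the result is always a valid route, so selection can immediately rate the recombined answer.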
Genetic algorithm research could shed new light on controversies about intellectual property rights. Evolution normally works by trying small variations on good ideas to search for better ones. The patent system inhibits exploration of small variations–because they would be infringing–biasing innovation toward taking large leaps. How does this affect overall progress? Although addressing such questions also requires careful institutional analysis, Holland gives us some useful new tools. Examining innovation as evolutionary learning would complement conventional analyses of free riders, monopolistic pricing, and transaction costs.
Classifier systems are motivated by a fundamental puzzle. Sensation necessitates classification into categories that, in turn, can only be learned by exposure to sensations. Can this be untangled? This same question motivated Hayek's The Sensory Order, and both Hayek and Holland start with the same observation: One part of a mind might sense and learn about other parts of a mind much as a whole mind senses and learns about the external world.
From here, Holland goes much further than Hayek. He constructs an ecosystem of creatures evolving by genetic algorithms in which a "chromosome" is interpreted as an if-then rule. The if part is a pattern that matches some combination of stimuli from the external world, as well as the then parts of other rules. When stimuli arrive, the classifier framework triggers those rules with closely matching if parts. The then parts of these rules are added to the "stimuli," triggering further rules. In this way, the mechanisms used to recognize patterns in raw stimuli are also used to recognize patterns of prior recognition events. The resulting uniformity enables sophisticated perceptions to grow on simpler ones.
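Holland's classifier conditions are strings over 1, 0, and a "#" wildcard; the rules and message encodings in this sketch are made up for illustration, but the cycle it shows, where then-parts posted in one step become matchable "stimuli" in the next, is the mechanism the paragraph describes:

```python
def matches(condition, message):
    """A condition matches a message position by position,
    with '#' acting as a wildcard ('don't care')."""
    return all(c == m or c == '#' for c, m in zip(condition, message))

def classifier_step(rules, messages):
    """One cycle: every rule whose if-part matches a current message
    posts its then-part, so later rules can recognize patterns of
    earlier recognition events."""
    return [then for cond, then in rules
            if any(matches(cond, msg) for msg in messages)]
```

Running two cycles shows a second-level rule firing not on the raw stimulus but on the first rule's recognition of it, which is how sophisticated perceptions grow on simpler ones.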
Why do some perceptual categories prosper while others wither away? Classifier systems use market-like competition in a procedure for discovering which categories reflect regularities in the world. Some then parts trigger actions in the external world, much as minds trigger muscles. These external actions have consequences that may be good or bad. When judged good, the classifier framework "pays" the rule that triggered the action. This rule passes along some of the fee to the rules that triggered it–payment for "recognition subcontracting," as it were. These subcontractor rules, in turn, pay their subcontractors. This "flow" of payments retraces, in reverse order, the flow of causation that led from external stimuli to useful actions. Over time those "recognition businesses" not contributing to useful behavior go broke, while the others are getting paid and having sex.
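The payment flow can be sketched as a "bucket brigade" over a chain of rule activations; the linear chain, the rule names, and the 50 percent fee below are illustrative assumptions, not Holland's exact scheme:

```python
def bucket_brigade(strengths, chain, reward, share=0.5):
    """Credit flows backward along the chain of rule activations:
    the rule that triggered the external action receives the reward,
    keeps part of it, and passes the rest back to the rule that
    triggered it, and so on down the line of 'subcontractors'."""
    strengths = dict(strengths)          # leave the caller's dict intact
    payment = reward
    for i, rule in enumerate(reversed(chain)):
        if i == len(chain) - 1:
            strengths[rule] += payment   # first rule: no subcontractor to pay
        else:
            fee = payment * share        # fee owed to the triggering rule
            strengths[rule] += payment - fee
            payment = fee
    return strengths
```

Rules whose accumulated strength stays high get to reproduce; rules that never sit on a profitable chain of causation slowly go broke, exactly the competitive discovery procedure the paragraph describes.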
Holland built this bridge between markets and learning in order to carry ideas from markets into classifier systems. What if we cross this bridge in the other direction? The network of relationships in a market economy–who contracts with whom, or even who knows whom–constantly shifts, adapting to complexity in the world. When Holland's classifier networks shift their connectivity because of payment flows, they also indirectly learn facts never known to any creature within the network. Perhaps human networks of trade learn facts not known to any individual. Indeed, once one understands classifiers, it becomes hard to suppose this is not the case. Though we cannot know these facts in particular, we may abstractly reason about the learning process, and come to realize that some of the rigidities imposed on markets (perhaps SEC constraints on investing) do vastly more damage to learning than others. Such differences would be missed by conventional social-cost and efficiency analysis.
Classifiers use tagging to determine which rules match which stimuli. The tag used to indicate a given property is arbitrary–why is cat the word for cat? A tag's "meaning" is only established by use and experience. Over time, tags evolve to profitably match recognition businesses and their potential customers, thereby reinforcing useful perceptual categories. This use of tagging suggests how to apply some of Holland's work to markets and language: A city contains many buyers and sellers, often trying to find each other. Once it becomes known that Castro Street is good for Chinese food, or Lawrence Expressway for computer equipment, buyers and sellers know where to meet. How do these meeting places emerge, and how does their character shift over time? The economist Thomas Schelling, in his book The Strategy of Conflict, describes a game played with students–they would each receive $1,000 if they met in New York without prior communication. Many went to the clock at Grand Central Station because they expected it to be a mutually vivid choice. As these expectations change, the profitable places to meet shift. As places shift, new expectations are learned.
Similarly, language involves speakers and listeners using words to try to mean the same thing. Language evolves to provide for broad agreement on a word's meaning, as well as subtle shifts of meaning over time. If words are considered as places in a space of possible sounds, we can think of the problem of agreeing on a word as one of selecting a meeting place.
Are the causes and dynamics of shifts in a word's meaning similar to the change in character of a shopping area? How do diverse incentives interact to pull a tag in different directions? Might some terminology shifts be parasitic mimicry phenomena, like the perpetual need for new euphemisms as old ones are used up? By providing a common conceptual framework across systems this different, and by mixing their metaphors with great agility, Holland provokes such cross-disciplinary questions even when his book does not ask them. To establish a new discipline, rather than a collection of somewhat related but ultimately independent fields, requires raising such questions.
Echo is a richer but less mature ecosystem designed to explore the emergence of complex aggregations, such as multi-cellular organisms or corporations. While it's too early to tell how well it will run, Echo demonstrates Holland's ability, decade after decade, to raise and explore important new questions. Evolution operates on information patterns that can replicate, he says, but what about the learning involved in growing a symbiotic arrangement? The creatures within this arrangement can replicate and evolve, but what about the arrangement itself? Echo seeks to probe such issues. It is a model of how symbiotic arrangements can be templates for forming larger, more complex, replicable creatures.
To realize this, Echo introduces adhesions, boundaries, and conditional replication. Symbiotes sufficiently interdependent come to adhere to each other, and a set of such closely coupled creatures may form a boundary–interior creatures are no longer available for interaction with outsiders, and so no longer need to be prepared for these interactions. In a differentiation process inspired by how embryos develop, once a boundary forms, the resulting "multi-agent" grows by replicating its component creatures into positions that approximately replicate their original relationships. With this transformation, the learning embodied in the structure of the arrangement becomes subject to normal evolutionary processes. Some of the organelles in our cells, such as mitochondria, started out as independent creatures. Plausibly, many vertically integrated companies form by "copying" a spontaneously grown pattern of subcontracting.
Anyone brave enough to attempt broad interdisciplinary work faces the danger of saying foolish things outside their area of expertise. Holland's courage is to be praised, but his book errs when relating his insights to economics. Were economics treated only as one cas example among many, these errors would not matter so much. However, the book's motivation relies on economics, so these errors must be dealt with.
The main discussion of flows, for instance, speaks of material flowing through a system, and the need for recycling to maintain a high concentration and avoid shortages. The biological ecosystem, however, is the only cas Holland presents for which the relevant flows are subject to material shortages. For all the others, flows are information signals and the notion of shortages makes no sense. The only attempt to show that the recycling issue is general uses a naive Keynesian analysis of money. Holland never mentions that the shortage goes away when the value of money changes. And his own ecosystems engage in sophisticated flows that make no use of recycling.
On a similar note, he states that the "tragedy of the commons" occurs simply "because each person mistrusts the moderation of others," omitting any discussion of the ways in which private property rights avoid such overuse of resources. To explain that cas suffer from problems that might be more generally investigated, the book repeatedly mentions viruses and trade balances. I can understand why viruses might be problematic, but trade balances?
Surprisingly, what Holland presents as the chief contribution his work can make in economics is to help us identify "lever points" for fixing economic problems. With better insights into how economies function, he imagines we might find those key interventions for fixing various problems. (His example is a Depression-era make-work program, the Civilian Conservation Corps, presented with scant evidence of net benefit.) This is the familiar central-planning fantasy, and everything the book says on the matter has already been well answered by Hayek.
Indeed, it has even been answered by Holland's own research: He built his computational ecosystems to develop according to their internal principles, not to enable himself to meddle and nudge. Each of his systems rests on a simple framework of rules–one might say a constitutional framework–designed to allow evolution without outside intervention. Were he to find himself manipulating "lever points" to keep his system on track, he would regard that as a bug to be fixed.
Contrast this with the system where he does propose such "outside" intervention–the market. In the market, the intervener is of no greater intelligence than the creatures populating the system, and of substantially lesser intelligence than the system as a whole. Worse, the information within the system cannot be gathered together. This is the Hayekian "knowledge problem," and it cannot be sufficiently emphasized. No one can ever succeed, no matter how totalitarian their control, at bringing together in one place the dispersed information that individuals and market structures are locally adapting to but are mostly unaware of. Holland endows his learning systems with a large population so the resulting system can learn more, and can behave in ways that take more knowledge into account. If a smaller population of creatures were given levers for controlling a larger population, this would simply reduce the aggregate intelligence of the system toward that of the smaller population.
Holland does not appreciate that there is no outside for the market. Any individual or group, given the power to wield a lever arm over the market, would become part of a system of incentives coupled to that market. This is the "public choice" insight. The resulting system is still one of feedback and incentives–now including the intervener–but no longer is it simply a market system. Given the evidence of history, the burden of proof is on those who propose mixed systems to explain why they would be an improvement. Perhaps Holland is too used to interacting with laboratory ecosystems, where he truly is outside, to have a sense of the force of this problem.
But even in his laboratory, should he attempt such outside intervention, he would still find himself suffering from the Hayekian knowledge problem, even though all the information literally is gathered together and available in one place. He would find that the system had come to embody, by adaptation, knowledge about itself to which he had no practical access. Indeed, much of the attraction of the spontaneous ordering approach to machine learning is that–in large measure because of Holland–we can get machines to adapt to complex phenomena we ourselves have difficulty figuring out.
The contrasting approach–having machines form explicit representations of the lessons they learn–has not resulted in much learning; and only these representations could have made the accumulated knowledge visible to an observer. Research to date shows that figuring out what an adaptive system has learned is astonishingly more difficult than the learning itself. Even with full access to and control of the entire state of the machine, an intelligence orders of magnitude greater than that of the creatures populating the machine or of the machine as a whole, and a position truly insulated from the system's incentives, Holland would still find it extremely hard to intervene to good effect, and he is wise not to try.
Despite the problems, Holland does a fine job proposing a discipline to study these questions and principles, and seeding it with his own research. His three software systems are powerful insight generators for all cas, not least because they remove the ghost from the machine. By stepping through the entire mechanisms of a few full evolutionary learning systems, we see there is no magic. Each is a system whose logic we can hold in our heads, thereby enabling thought experiments. Applying the resulting clarity of insight to the examination of other cas leads to better understandings of each and of the relationships among them.
The interdisciplinary investigation Holland is pursuing into the nature of spontaneously ordering evolutionary learning systems is the research program Hayek had earlier proposed. Hayek did much early original work and clearly would have loved to do more. However, as Thomas Jefferson said, "We shed our blood…that our children may be philosophers." Having seen the horrors statism leads to, Hayek shed much of his life warning us, rather than advancing his early groundbreaking work. This research program is finally being pursued, but, ironically, mostly by researchers unaware of Hayek's work. At the same time, those most influenced by Hayek still fight the old fights. Now that the tide has turned in the struggle to free minds and free markets, it is time to join these bold researchers in the quest to understand both.
Mark S. Miller is one of the pioneers of the field of agoric computation–using idealized markets as the foundation of secure distributed general-purpose computing. He works for Electric Communities in Cupertino, California, and can be found at http://www.caplet.com. He thanks Bill Tulloh, K. Eric Drexler, and others for their ideas and assistance with this article.