
We Want… Information

A modest proposal for intelligence


Over the last year, the old joke that "military intelligence" is an oxymoron has begun to seem signally unfunny. There was, of course, the little matter of the failure to prevent a couple of passenger jets from crashing into a pair of tallish Manhattan buildings, despite vague warnings that terrorist hijackings might be in the works. But it was during the run-up to the Iraq war that our fearless leaders got so much wrong that if John McLaughlin were ever forced to summarize the argument, he'd surely have a coronary. There were those infamous sixteen words that made "yellowcake uranium" a buzzword for a few weeks, and eventually led to George Tenet's admission that the claim that Iraq had tried to acquire it from Africa shouldn't have been allowed to remain in the State of the Union address. There were those deadly aluminum tubes that Saddam was going to use for a centrifuge, except that he probably wasn't. There was the assumption that we'd be welcomed as liberators, based largely on the say-so of an exile who's turned out to be a self-serving serial truth-bender at best and a shill for the Council of Guardians at worst. There was the notorious Feith memo inferring a Saddam-Osama love child on the basis of a few lunch dates, which gave rise to the Clintonian debate about how to define a "relationship" that rages on even now. Though we did at least find those huge stockpiles of weapons of mass destruction. (Huh? What's that?) Ah. Well, crap.

The standard response to these spectacular failures has been to suggest that we need better information sharing and more accountability in the intelligence agencies. And while I'm not ornery enough to disagree entirely with such a common-sense notion, I do want to say at least a qualified word in favor of more ignorance and less individual responsibility.

Both University of Chicago legal scholar Cass Sunstein and the New Yorker's James Surowiecki have recently penned books treating the fascinating problem of information cascades. Sunstein and Surowiecki both cite research by economists Angela Hung and Charles Plott based on a series of experiments in group decision making. What the pair learned by asking people to pick marbles out of urns may, believe it or not, shed important light on how to get our intelligence analysts to make the right choice in far more important contexts.

The experimental setup was simple: Subjects were told that they would be picking marbles from one of two urns. Each urn contained a mix of dark- and light-colored marbles, but urn A had many more light than dark marbles, while urn B had many more dark than light ones. Each subject would pluck a marble from the urn, show it to nobody, and then, in sequence, announce a guess as to which urn the group was picking from. Each member of the group stood to win a few bucks if she guessed correctly.

The problem was this: If the first couple of people had picked, say, a dark marble, even though they were picking from urn A, they'd reasonably enough guess that it was urn B. But when the next person down the line got a light marble, she'd equally rationally conclude that the evidence provided by the previous two guessers outweighed that provided by her own marble. Because subsequent guessers were (rationally) playing follow-the-leader, they failed to reveal the private information provided by their own selections, leading everyone to act as though they'd seen a dark marble, even if every player after the first two had drawn a light one.
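For the curious, here's a minimal sketch in Python of how a cascade like that locks in. It is not Hung and Plott's actual design: the urn labels, the particular draw sequence, and the rule of tallying earlier public guesses as if they were marbles are all illustrative assumptions of mine, chosen only to mirror the reasoning described above.

```python
def sequential_guesses(draws):
    """Each player sees her own marble plus the earlier public guesses.

    She tallies each earlier guess as though it revealed that player's
    marble (the follow-the-leader logic sketched above) and votes for
    whichever urn the tally favors, breaking ties with her own marble.
    """
    guesses = []
    for marble in draws:
        dark_evidence = sum(g == "B" for g in guesses) + (marble == "dark")
        light_evidence = sum(g == "A" for g in guesses) + (marble == "light")
        if dark_evidence != light_evidence:
            guesses.append("B" if dark_evidence > light_evidence else "A")
        else:
            guesses.append("B" if marble == "dark" else "A")
    return guesses

# The group is really drawing from mostly-light urn A, but the first two
# marbles happen to be dark; every private light marble after that is
# outvoted by the public record, so the cascade never breaks.
print(sequential_guesses(["dark", "dark", "light", "light", "light", "light"]))
# -> ['B', 'B', 'B', 'B', 'B', 'B']
```

Two unlucky dark draws up front, and the four light marbles that follow never show up in anyone's announced guess.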

The researchers found that there were a couple of ways to improve group performance. One, obviously enough, was to have everybody announce her guess independently and simultaneously, making it impossible for anyone to fall in line with previous guesses. The other was to reward people when a majority of the group as a whole made the right choice, rather than paying them when their individual guesses were correct. That gave everyone an incentive to dissent if their own private information seemed to justify deviating from the previous majority.
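Again purely as an illustration, and not the researchers' own protocol, the same toy model shows why simultaneous announcement helps: each player can only report what the marble in her own hand suggests, so a simple majority tally recovers the signal the cascade buried. The function names and the draw sequence are, as before, my own hypothetical choices.

```python
def simultaneous_guesses(draws):
    """With simultaneous announcements, no one sees anyone else's guess,
    so each player can only vote the marble in her own hand."""
    return ["B" if marble == "dark" else "A" for marble in draws]

def majority(guesses):
    """The group's collective answer, taken as a simple majority vote."""
    return "B" if guesses.count("B") > guesses.count("A") else "A"

# Same unlucky draws as before: two dark marbles up front, then four light.
draws = ["dark", "dark", "light", "light", "light", "light"]
print(simultaneous_guesses(draws))            # ['B', 'B', 'A', 'A', 'A', 'A']
print(majority(simultaneous_guesses(draws)))  # 'A' -- the correct urn
```

Paying on the majority's accuracy rather than on each individual guess pushes in the same direction: if the payoff turns on the collective verdict, each player's best contribution is an honest report of her own private signal, which is exactly the behavior this sketch assumes.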

We now know that there were skeptics within the intelligence community about the major false claims that were made before the war. So why didn't their voices get heard? The information cascade effect seen in the urn experiment provides a clue, and it's almost certain that the hierarchical structure of intelligence only exacerbated the effect. Readers of Robert Anton Wilson will be familiar with what he's called the SNAFU Principle: Because subordinates tend to tell superiors what they want to hear, the higher up any hierarchical ladder you go, the more distorted the picture becomes. The person with the most authority in the system will likely be the most ignorant—even when it isn't George W. Bush.

It was all too clear what the administration wanted to hear. In retrospect, the decision to establish an Office of Special Plans, designed to counter the CIA's reluctance to discover what hawks knew must be true about Iraq, all but guaranteed a tidal wave of bad information. But that's not to say all that misleading info was intentional disinformation: There were very likely cascade effects at work.

There may be a special problem with "accountability" in intelligence work: If you're wrong when the majority gets it wrong, you're unlikely to get singled out, but if you dissent from an accurate consensus, the mistake is much more likely to get noticed. One way to break cascades, then, is to leave analysts feeling free to draw conclusions that run against the grain on the basis of the specific information they're studying, even if it seems to them that, on balance, their info is probably an aberration. Another is to share raw data, yes, and to pool independent conclusions based on that data at the end of the process, but to insulate analysts from their peers' prior conclusions while they're deciding how to interpret new pieces of intelligence.

Often, playing follow-the-leader is a perfectly rational strategy. We all have limited time and information, and it makes sense to assume in many contexts that if most people like a certain restaurant or avoid a particular make of automobile, you'll do well by making the same choice. But the intelligence failures of the last several years are a potent reminder that, every now and again, it's a good idea to check whether you're following the leader off a cliff.