The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

Is the Supreme Court allergic to math?

The Supreme Court in 2012. (J. Scott Applewhite/Associated Press)

Yes, says Oliver Roeder, in an interesting essay at fivethirtyeight.com. At least some of the justices, he suggests, have "a reluctance—even an allergy—to taking math and statistics seriously," as evidenced most recently by their questions and comments at the Oct. 3 oral argument in the Supreme Court's Gill v. Whitford "partisan gerrymandering" case.

As he himself acknowledges, he's hardly the first to make the suggestion; there's a rather substantial library of academic commentary on "innumeracy" at the court (and, more generally, throughout the judiciary). But I think he's correct in pointing out the rather serious consequences this might have in the particular context of the court's deliberations in Gill.

I have long been struck by the fact that it is unfortunately well within the norms of our legal culture—among lawyers, judges, law professors and law students—to treat mathematics and related disciplines as a kind of communicable disease that we want no part of. I wish I had a nickel for every time I heard a student or colleague or member of the bar say something like, "Oh, math! I don't do math—that's why I went to law school!" It is well known that the surest way to place three-quarters of your audience into shutdown mode in a law school class or legal conference is to introduce a formula, or a graph, or the results of some calculation.

Gill, as most of you are probably aware, involves a challenge to the Republican-dominated Wisconsin legislature's 2011 redistricting map, a map that, according to the three-judge panel below, was both intended to, and did, systematically disadvantage Democratic voters and advantage Republican voters across the state. As evidence of the drafters' intent to "secure Republican control of the legislature for the decennial period," the court noted that the redistricting committee had prepared a number of different maps, and that the map finally chosen was, in the judgment of those who had constructed it, the one statistically most likely to produce a Republican-dominated legislature.

As evidence of the discriminatory effect, the court found that it was "clear that the drafters got what they intended to get"—a map that made it "more difficult for Democrats, compared to Republicans, to translate their votes into seats." In the 2012 election, Republicans won 48.6 percent of the statewide vote, which gave them 61 percent of the seats in the state's 99-seat assembly, and in the 2014 election, Republicans took 52 percent of the statewide vote and ended up with 64 percent of Wisconsin State Assembly seats. [Put differently, when the Democrats received 51.4 percent of the statewide vote in 2012, they ended up with 39 assembly seats; when the Republicans received around the same percentage (52 percent) in 2014, they ended up with 63 seats—a 24-seat disparity.]

Of course, one would hardly expect perfectly proportional results—52 percent of the overall vote leading to 52 percent of the legislative seats—from any districting map; and "partisan gerrymandering" is, to some degree at least, an inherent feature of any system (like the one that pertains in most states) that puts the legislature in charge of constructing the maps. So the case, in essence, poses the question: How much is too much? And how do we know whether and when it's too much?

Now, I'm not sure I agree with Roeder when he says that the case "hinges on math"; one could imagine the court declining to engage with the "how much is too much?" question on any number of grounds (such as the plaintiffs' standing to raise the claim, or the "justiciability" of political gerrymandering claims in general).

But it is certainly true that "how much is too much?" questions can often (and sometimes only) be profitably analyzed with the aid of mathematical tools. If you want to know whether a building exceeds the local height limit, you pull out a ruler. Similarly, it's useful to have some way to measure the extent to which the Wisconsin map does, or does not, entrench Republican control by giving Republican votes greater "weight" than Democratic votes.

COMMENTERS PLEASE NOTE: You do not have to remind me that when they are in power, Democrats "do the same thing." I recognize that; that is precisely what makes this case so important. Power will attempt to entrench itself by all possible means, and that is equally objectionable coming from either direction on the political spectrum. This is not a partisan issue; it is one that anyone who cares about democratic processes should care about. If it's not your ox being gored today, it will be tomorrow, I promise you.

The court below used a number of such measures, all of which demonstrated the bias incorporated into the Wisconsin maps: the "mean-median" index, the "partisan bias" measure, and the much-discussed (and terribly named) "efficiency gap" (EG). The EG compares the two parties' "wasted votes": votes that would not have affected the outcome of the election had they not been cast. [For example, all votes cast for a losing candidate are "wasted" in this sense, as are all votes for the winning candidate in excess of the bare majority (50 percent plus one) needed to secure the election.] All elections will have large numbers of wasted votes; the question, though, is whether the map is skewed in a manner that systematically wastes more Democratic votes than Republican votes (as the court below found that it was).
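For readers who find this easier to see in code than in prose, here is a minimal sketch of the calculation, in Python. Everything in it is illustrative: the function names are mine, and the district vote totals are invented rather than the actual Wisconsin figures. It is simply the arithmetic the EG describes, not the plaintiffs' model.

def wasted_votes(votes_a, votes_b):
    # Losing-side votes are all wasted; winning-side votes are wasted only
    # to the extent they exceed the bare majority needed to win the seat.
    # (Ties are ignored for simplicity.)
    threshold = (votes_a + votes_b) // 2 + 1
    if votes_a > votes_b:
        return votes_a - threshold, votes_b
    return votes_a, votes_b - threshold

def efficiency_gap(districts):
    # EG = (party A's total wasted votes - party B's total wasted votes),
    # divided by the total number of votes cast in all districts.
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in districts:
        wa, wb = wasted_votes(votes_a, votes_b)
        wasted_a += wa
        wasted_b += wb
        total += votes_a + votes_b
    return (wasted_a - wasted_b) / total

# A hypothetical five-district state: party A wins three narrow seats while
# party B's voters are "packed" into two lopsided seats.
districts = [(55, 45), (55, 45), (55, 45), (20, 80), (20, 80)]
print(f"Efficiency gap: {efficiency_gap(districts):+.1%}")  # prints roughly -28%

Under this (arbitrary) sign convention, a large negative number means that party B is wasting far more of its votes than party A, which is the kind of systematic skew the court below found in the Wisconsin map.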

There are any number of questions one might have about how this phenomenon can be measured, and how particularly egregious violations of nonpartisanship can be identified. But the transcript of the oral argument here makes for rather depressing and disheartening reading. To my eyes, the argument shed less light than usual on the hard questions in the case, and the attitude of several of the justices toward the measurement question ranged, as Roeder suggests, from bemused befuddlement to outright hostility. Justices Samuel A. Alito Jr. and Neil M. Gorsuch pressed the challengers on whether any metric could ever serve as a constitutional bright line, and Chief Justice John G. Roberts Jr. was particularly dismissive of what he called—rather oddly—"sociological gobbledygook" in the challengers' arguments:

[If] you're the intelligent man on the street and the Court issues a decision, and let's say the Democrats win, and that person will say: Well, why did the Democrats win? And the answer is going to be because EG was greater than 7 percent, where EG is the sigma of party X wasted votes minus the sigma of party Y wasted votes over the sigma of party X votes plus party Y votes. And the intelligent man on the street is going to say that's a bunch of baloney. … And that is going to cause very serious harm to the status and integrity of the decisions of this Court in the eyes of the country.
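Stripped of the transcript's "sigmas," the formula Roberts is reciting is just the calculation sketched above: EG equals party X's total wasted votes minus party Y's total wasted votes, divided by the total votes cast for the two parties, with 7 percent serving, in his hypothetical, as the threshold that decides the case.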

That strikes me as a bit disingenuous. There are any number of opinions that the court (and the courts) issue that would leave the "intelligent man on the street" scratching his head, because of the presence of what we might call "legal gobbledygook," and if Roberts is suggesting that the court's use of objective mathematical indices of partisan asymmetry would be especially troublesome to the man on the street, I'm not convinced.

I think that perhaps Alito put his finger on what really troubles these metric skeptics: the fear that they will look ridiculous at some point down the road for having chosen a flawed measuring stick:

… gerrymandering is distasteful. But if we are going to impose a standard on the courts, it has to be something that's manageable and it has to be something that's sufficiently concrete so that the public reaction to decisions is not going to be the one that the Chief Justice mentioned, that this three-judge court decided this, that—this way because two of the three were appointed by a Republican president or two of the three were appointed by a Democratic President.

[Over the past 30 years] judges, scholars, legal scholars, political scientists have been looking for a manageable standard. All right. In 2014, a young researcher (Eric McGhee) publishes a paper, in which he says that the leading measures previously, symmetry and responsiveness, are inadequate. But I have discovered the key. I have discovered the Rosetta stone and it's—it is the efficiency gap. And then a year later you bring this suit and you say: There it is, that is the constitutional standard. It's been finally— after 200 years, it's been finally discovered in this paper by a young researcher …

Now, is this the time for us to jump into this? Has there been a great body of scholarship that has tested this efficiency gap? It's full of questions. Mr. [Eric] McGhee's own amicus brief outlines numerous unanswered questions with—with this theory.

It's a legitimate concern, I suppose. Here, as in many other areas of the law where courts are presented with non-legal "expert" testimony, judges should be wary of jumping too quickly into the fray and choosing one contested side over another, given that they generally do not possess the tools with which to evaluate the pros and cons of the testimony presented.

But I do hope the court does not rest on this to abdicate its responsibility to craft some meaningful and manageable measures of partisan interference with the electoral process.

Many years ago, John Hart Ely provided, notably in his book "Democracy and Distrust," what I continue to regard as the most persuasive solution to the fundamental dilemma posed by the institution of (undemocratic) judicial review in a democracy, and the conflicts arising from allowing the most unrepresentative branch of the government the power to overturn actions taken by the more democratic branches. Ely argued, in essence, that the court's appropriate role is that of referee in the electoral arena. Ordinary electoral processes can be relied on to self-correct, without the need for judicial intervention, most attempts by lawmakers to act outside of constitutional boundaries, except in those circumstances where either (a) those actions corrupt the electoral process itself and are, as a consequence, self-sustaining and uncorrectable, or (b) the majority is withholding from the minority the protections it affords to itself. Electoral politics can't correct these problems, which are inherent in the nature of representative democracies, and courts must step in.

The Warren court's "one man, one vote" decisions of the 1960s were, in Ely's view, paradigmatic examples of the first category. Judicial intervention in the reapportionment cases was justified because the systematic bias favoring rural voters in state legislatures would never self-correct: the legislatures were composed of those who had directly benefited from the bias, and so the court had to step in.

And so, too, in the Gill case: Wisconsin's Democratic voters cannot, through their votes, correct the bias in the Republicans' favor, because the map was drawn precisely to prevent them from doing so. It will be a sad day indeed if the court turns away from its constitutional obligation to keep the electoral process fair because its collective eyes glaze over at the sight of a mathematical symbol or formula.