The Volokh Conspiracy



An emerging scholarly consensus on mismatch and affirmative action (ideologues not welcome)


The effectiveness of affirmative action has never been the focus of the plaintiffs in Fisher v. University of Texas, but after Justice Antonin Scalia's comments at Wednesday's oral argument, it is likely to be a significant part of the debate leading up to the Court's decision next spring. It is thus a good time to revisit the mismatch debate, and I will be writing several posts about it over the coming days.

The "mismatch hypothesis" contends that any person (certainly not just minorities) can be adversely affected if she attends a school where her level of academic preparation is substantially lower than that of her typical classmate. The idea was advanced in the mid-1960s, not in the context of affirmative action; it has been a subject of empirical research for about 20 years, with a sharp uptick in the sophistication of that research just in the last five.

The growing interest in the mismatch question, and the ideological controversy it tends to generate, prompted a striking effort by the Journal of Economic Literature (JEL) to determine whether the existing literature pointed to some consensus among economists. That effort produced a new article, which is the focus of today's post.

JEL is one of the flagship journals of the American Economic Association; it generally publishes articles that synthesize the state of knowledge in a field rather than present new results. Two years ago, JEL's editors decided to commission an article on the "mismatch" (or "peer effects") debate. Recognizing that this was an unusually controversial issue, the editors asked two economists with differing starting positions to write it: Peter Arcidiacono, a Duke economist who has published several important studies on mismatch, and Mike Lovenheim, a Cornell economist who was skeptical of the hypothesis. When the authors completed a draft, JEL sent it to seven diverse peer reviewers - an unusually large number - to ensure the draft was critically examined. All seven recommended publication. JEL has a publication queue, but the article will probably appear in the next issue.

Given this process, it should not be surprising that the resulting article - "Affirmative Action and the Quality-Fit Tradeoff" - does not take thundering positions on any of the outstanding issues. Indeed, it finds that on many of the most important questions raised by the mismatch hypothesis, the available data is too scattered and too poor in quality to reach clear conclusions. Moreover, since the authors find there are "positive average effects of college quality" on a host of outcomes, any mismatch effect has to be large enough to outweigh these advantages. Nonetheless, the authors find persuasive evidence that such mismatch effects occur, particularly in law school and in science education.

Many readers will recall that in 2005, I published an analysis of law school affirmative action that concluded that African American bar passage rates were seriously depressed by the use of very large admissions preferences. A host of critics immediately descended, generally conceding that my numbers were correct but arguing that my work was theoretically incoherent and that more sophisticated analyses disproved the existence of mismatch. My observation at the time was that most law professors found the literature bewildering, and either felt neutral about mismatch or took whichever side matched their prior beliefs.

Anyone who wondered then just what to make of the debate will benefit from reading pages 14-31 of the JEL article, which deal entirely with the question of law school mismatch. The authors completely dispose of the claim that my work was conceptually flawed; they develop three models of how mismatch might work and show that my work fits neatly within those frameworks. They also show that none of the critics landed more than a glancing blow on the mismatch hypothesis. In the end, they conclude that the evidence that mismatch substantially lowers minority bar passage rates is "fairly convincing," and they strongly endorse the quest for better data - in particular, the anonymized data I am still trying to pry from the State Bar of California.

The other wide-ranging examination of law school mismatch is Doug Williams's study, "Do Racial Preferences Affect Minority Learning in Law Schools?", which appeared in the Journal of Empirical Legal Studies in the summer of 2013. Using the very imperfect data from the Bar Passage Study (BPS), Williams developed a series of empirical tests that built not only on my work but also - indeed, even more so - on tests developed by critics of mismatch. Williams made several improvements in existing techniques, partially overcoming problems in the BPS by making adjustments for such factors as the jurisdictions where students took the bar exam and, if they failed, how many times they took it.

Williams's paper reports results from dozens of different combinations of models and outcomes. With impressive consistency, his analysis shows very powerful evidence of law school mismatch, especially for first-time bar takers. His results are all the more compelling because, as Arcidiacono and Lovenheim point out, the weaknesses of the BPS data bias all analyses against a finding of mismatch. Williams, too, concludes his piece with a plea for the release of better data.

Meanwhile, not one of the law school mismatch critics has managed to publish his or her results in a peer-reviewed journal, though at least some have tried. As I will discuss in another post, many of these critics still shrilly hold to their earlier views. But it should be clear by now to any reasonable observer that mismatch is a serious issue that the legal academy needs to address.