The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent


Maybe the Status Quo Isn't So Bad? Thoughts on Two Proposals about Law Review Reform from the AALS

Sorting at scale is always going to be hard.


Brian Galle, writing on behalf of an Advisory Committee on Law Journal Reform of the AALS Section on Scholarship, has posted two proposals to reform the law review submission process.  The proposals are very interesting, but I'm skeptical they would work.  On the whole, I suspect they would do more harm than good.  In this post, I will explain why.

Let me start with an overview of the proposals. According to the recently posted discussion draft, the main problem with the law review submission process is that its scale makes for poorly informed decisions. In the current system, there are hundreds of general-interest law reviews and thousands of law review authors. A single journal can therefore receive several thousand submissions. Editors can't possibly evaluate them all carefully in the time crunch of submission season. On top of that, journals typically receive hundreds of expedite requests requiring them to decide very quickly whether to accept an article. Unable to make informed decisions, editors rely too often on proxies and can't give submissions the thoughtful evaluation of scholarship that would be best.

The authors offer two proposals for reform.

The first proposal is to mandate limits on submissions and acceptances.  Authors would be limited in how many submissions they could make at any one time (say, 10 or 20), and they would be required to accept the first offer they receive.  Journals, in turn, would be forbidden from making offers until a set period after submission had passed (say, a month).  The result, according to the idea's proponents, would be a more orderly system of submissions and acceptances.  The rules implementing this proposal would be promulgated by a committee of law professors called the "Selection Committee."  The Selection Committee would police violations of its rules and could punish violators with sanctions, such as temporarily barring access to the submission system.

The second proposal is a matching system. Every author would choose a set of journals in which they would be willing to publish.  Every journal would go through the submissions and decide which ones it is willing to publish. A computer would then match articles to journals based on the mutually expressed preferences.  The results would be binding on both authors and journals.  The matching system would be run by the Selection Committee, as above.
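As I read it, the discussion draft doesn't say how the computer would perform the match. The natural model, though, is a deferred-acceptance match of the kind used for medical residencies: articles "propose" down their preference lists, and journals tentatively hold their favorites up to their capacity. Purely to make the mechanics concrete, here is a minimal sketch in Python. Everything in it is hypothetical -- the journal names, capacities, and preference lists are mine for illustration, not anything specified in the AALS proposal.

```python
# Illustrative sketch only: an author-proposing deferred-acceptance match
# between articles and journals. The AALS draft does not specify an
# algorithm; this is one standard way such a match could be computed.

def deferred_acceptance(author_prefs, journal_prefs, capacity):
    """author_prefs: article -> journals the author would accept, best first.
    journal_prefs: journal -> articles the journal would publish, best first.
    capacity: journal -> number of slots.
    Returns a dict mapping each matched article to its journal."""
    # Precompute each journal's ranking so comparisons are quick.
    rank = {j: {a: i for i, a in enumerate(prefs)}
            for j, prefs in journal_prefs.items()}
    next_choice = {a: 0 for a in author_prefs}   # next journal each article tries
    held = {j: [] for j in journal_prefs}        # tentative acceptances per journal
    free = [a for a in author_prefs if author_prefs[a]]

    while free:
        article = free.pop()
        prefs = author_prefs[article]
        if next_choice[article] >= len(prefs):
            continue                              # list exhausted; article unmatched
        journal = prefs[next_choice[article]]
        next_choice[article] += 1
        if article not in rank.get(journal, {}):
            free.append(article)                  # journal won't publish it; try next
            continue
        held[journal].append(article)
        held[journal].sort(key=lambda a: rank[journal][a])
        if len(held[journal]) > capacity[journal]:
            free.append(held[journal].pop())      # bump the least-preferred holding

    return {a: j for j, articles in held.items() for a in articles}


# Hypothetical toy inputs, just to show the shape of the data.
author_prefs = {
    "art1": ["J-A", "J-B"],
    "art2": ["J-A"],
    "art3": ["J-B", "J-A"],
}
journal_prefs = {
    "J-A": ["art3", "art1", "art2"],
    "J-B": ["art1", "art3"],
}
capacity = {"J-A": 1, "J-B": 1}

print(deferred_acceptance(author_prefs, journal_prefs, capacity))
# -> {'art1': 'J-A', 'art3': 'J-B'}; art2 goes unmatched
```

The sketch mostly shows what the inputs would have to look like: for the match to run, every journal would need a ranked list of every article it is willing to publish. That ranking burden is the practical problem I return to below.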

I appreciate the group's very thoughtful engagement with hard design questions.  But I'm skeptical that either of these proposals would be an improvement.  My initial reaction is that both proposals would just make the system worse.

Here's my thinking.

Let me start with what I take to be the big problem here.  It's a problem that the proposal hints at but, as far as I can tell, doesn't actually identify: the prominence of the general-interest law review.  For quirky historical reasons, most of the major law reviews are general.  They will consider any article that remotely touches on anything relating to law.  An article could cover any field of law, whether constitutional law, comparative law, tax law, jurisprudence, or anything else.  An article could be theoretical, doctrinal, or empirical.  And the connection to law can be (and often is) modest.  An article could really be about history, or economics, or psychology, or sociology, or any other field as it touches on some aspect of law.  Everything under the sun can be considered by any general-interest law review.

As I see it, the general-interest law review is the real source of the problem the new proposal identifies.  Several thousand law-related submissions are created every year. If you're in charge of a general-interest law review, you're open to considering every single one of those several thousand submissions.  And if most of the major journals are general-interest law reviews, that means that most of the journals are open to considering every single one of those submissions.  And because all of those general-interest law reviews are offering essentially the same service, they compete for author interest based primarily on prestige.  In a world in which submitting an article is relatively inexpensive, authors have an incentive to submit widely to find the most prestigious journal that will accept their papers.

In short, the prominence of the general-interest law review makes the scale of the problem inherently unmanageable.  From the journal side, we have hundreds of journals competing for the best articles they can get from thousands of submissions on every legal topic under the sun.  And from the author side, we have thousands of authors competing for the best placement they can get from hundreds of journals.  If you want an orderly system in which decisions are made slowly and deliberately, figuring out a way to match up that many articles with that many spots in journals is an incredible challenge. (Professors often complain that student editors don't do a good job selecting papers, but I doubt professors could do much better faced with these sorts of numbers.)

If that's the problem, what's the answer?

One obvious answer would be to simply abolish general-interest law reviews.  We could instead have a siloed system like we normally have in other academic fields.  Each journal would have a specific subject matter or methodology.  For example, instead of the Harvard Law Review, you could have the Harvard-based Journal of Constitutional Theory.  Instead of the Yale Law Journal, you could have the Yale-based Jurisprudence Review.  Instead of the Stanford Law Review, you could have the Stanford-located Papers in Intellectual Property Law.  Instead of the Columbia Law Review, you could have a Columbia-based Business Law Review.  (Of course, there are many subject-based law reviews today, including, as it turns out, a Columbia Business Law Review. But my sense is that more of the placement angst surrounds the general-interest law reviews.)

If scale is the real problem, that strikes me as the real way to solve it.  If every journal picked a subject area or methodology, each would only consider a limited subset of articles.  This narrowing would mean that authors would have only a handful of journals that would even consider their papers.  Authors would submit to only those journals, and would accept at the best of them or not publish the article if they receive no offers.  Each journal would get a limited number of submissions simply because they would not consider submissions outside their narrow field.  The scale problem would be solved.

Am I actually recommending that general-interest law reviews should be abolished?  No. Although this would solve the scale problem, I think it would also eliminate some of the major strengths of the status quo.  For example, the prominence of the general-interest law review creates an incentive to write in a more accessible way.  General-interest law reviews also circumvent the gate-keeping function of subject-matter silos that I suspect would block new ideas from entering the academy.  So I don't actually want them gone. My point is just that the scale problem the discussion draft tries to solve is unavoidable when you have so many general-interest journals open to publishing so many submissions.  The discussion draft doesn't question the prominence of general-interest law reviews, so it necessarily offers proposals to deal with the crazy scale of the matching problem rather than to change it.

Let's turn to the two specific proposals.  Would they make the problem better or worse?  I suspect they would make the problem worse.  Here's why.

The first proposal, limiting submissions and acceptances, strikes me as an overly heavy-handed way to limit choice.  Capping the number of submissions will, as a practical matter, force authors to eliminate the journals that they think are unlikely to accept their articles.  I worry that authors would limit themselves by forgoing the chance at more prestigious placements, which would have the effect of deepening existing hierarchies.  The lost opportunity for authors would be matched by the lost opportunity for journals, preventing the matching that is the current system's main strength.  And requiring journals to wait a month before accepting an article would draw out the process for too long.  Maybe I am too much of a libertarian.  But I think it's better both for authors and for journals to have a greater set of choices.

The second proposal, the matching system based on complete rankings, strikes me as impractical.  If it could work, it sounds like a good answer: everyone's preferences are optimized.  But how is any journal supposed to rank several thousand submissions?  And how is every journal supposed to do that?  It's like asking a committee of law students to grade thousands of 60-page papers.  And it's worse than that: journals would not only have to grade the papers, but they would also have to put each one in exact order of preference (up to some quality cutoff they would need to identify, if they can do so in the abstract).  I've been on committees of professors where we tried to rank a handful of law review articles in one field, and it was highly contested and difficult.  How can we expect law students to do that for hundreds or even thousands of articles from every field at once?  If I am understanding the proposal correctly, it doesn't seem like something that editors can do.

If I had to pick one of the three options -- the status quo, the first proposal, or the second proposal -- I would favor the status quo.   The current selection process is messy and imperfect.  But it also has considerable strengths that are too often overlooked.  Editors can look through submissions for diamonds in the rough.  The expedite process can focus journals on a subset of submissions that have been judged by peers as worthy of consideration.  Granted, it's understandably frustrating for editors when a higher-ranked journal picks off an article they found.  But it's a frustration that some amount of market sorting is going to produce (and my sense is that editors frustrated by losing to higher-ranked competitors have less of a problem with picking off articles from journals ranked below them).  Finally, because journals are competing primarily on prestige, the stakes of journals measuring quality incorrectly are relatively low.

As I said above, the status quo is imperfect.  But I see a lot of that imperfection as the fault of the general-interest law review.  In a world with hundreds of general-interest law reviews and thousands of eligible articles, the matching process is destined to be difficult. If we're not going to abolish general-interest law reviews, we need to pick which set of serious problems with article selection is less troublesome than the others.  And my own sense is that the status quo is likely less troublesome than the particular alternatives discussed by the AALS Advisory Committee.