The Volokh Conspiracy
Does the Quality of Brief Writing Affect the Outcome of Supreme Court Decisions?
An interesting empirical study looks at whether better briefs lead to better outcomes.
Does writing a better brief increase an advocate's odds of prevailing before the Supreme Court? A new empirical study suggests that the quality of the writing (as opposed to oral argument, the substance of the arguments, and the overall quality of representation) may not matter as much as some might have thought.
Adam Feldman of EmpiricalSCOTUS and Professor Pamela Corley, a political scientist at SMU, have a new paper in The Journal of Appellate Advocacy and Practice, "Does Quality Matter? The Influence of Party Briefs and Oral Arguments on the U.S. Supreme Court," examining whether the quality of brief writing, as measured by BriefCatch, appears to affect the likelihood that a party prevails before the Supreme Court. The study builds on prior research showing some correlation between party success and how Justice Blackmun rated the quality of the oral argument. (Justice Blackmun kept copious notes about such things.)
Here, from the study, is a summary of their conclusions:
This article applies tools from a piece of software called BriefCatch to provide writing quality scores to the same set of cases analyzed in Johnson et al.'s 2006 article. In doing so we examine the comparative role of briefs and oral argument quality in Supreme Court decision making. While BriefCatch grades are not a perfect companion to Justice Blackmun's grades for oral arguments, especially because they are calculated exogenously from the justices, as opposed to Blackmun's grades, they provide us a measure for brief quality and in doing so allow us to extend the study of the mechanisms affecting Supreme Court decision making beyond what was previously possible. In addition to measuring the writing quality of briefs, we also include another measure of brief quality—the number of Supreme Court precedents cited—in order to capture the legal authority relied on in the brief.
We find that, after controlling for elite attorneys and the quality of oral argument, a higher BriefCatch grade is not associated with the final vote on the merits; however, there is an association between how well-grounded the brief is in precedent and the final vote on the merits. Furthermore, our study provides continued support for Johnson et al.'s finding that the probability of a justice voting for a litigant increases dramatically if that litigant's lawyer presents better oral arguments than does the competing counsel, a result that holds even after controlling for the quality of the brief.

These results are important for three reasons. First, given that the workings of the Court are often shrouded in mystery and the Court was designed as the primary body of the federal government with responsibility to interpret the Constitution, it is important to understand the different components of its decision-making process. Second, the findings inform our understanding of judicial behavior by helping us better gauge the importance of briefs and oral arguments in the decision-making process. The fact that judicial decisions are associated with quality lawyering before the Court suggests the value of looking beyond ideology and strategy to explain Supreme Court decision-making. Third, by showing an association between winning and quality lawyering, we offer practical guidance to practitioners. Our findings suggest important implications for the role of persuasion in politics more generally. For example, recent research suggests that political persuasion in social media is most likely to occur when people are presented with well-reasoned arguments. Thus, it is important to understand whether quality argumentation matters, both orally and in writing.
The study's method necessarily has some limitations, but it is quite interesting nonetheless.
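For readers who want to see the shape of the test, here is a minimal sketch in Python. Everything in it is hypothetical: simulated data, made-up variable names, and a bare-bones specification, not the authors' actual dataset or model. It only illustrates the general approach the paper describes: regress each justice's merits vote on petitioner-minus-respondent differences in writing score, precedent citations, and oral-argument grade, with an elite-attorney control.

```python
# Rough sketch of the paper's kind of model, with entirely simulated data.
# Unit of analysis: one justice's vote in one case. Variable names are
# hypothetical, not taken from the authors' dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # justice-votes

df = pd.DataFrame({
    # Petitioner-minus-respondent differences on each measure:
    "briefcatch_diff": rng.normal(0, 10, n),   # writing-quality score
    "precedent_diff": rng.normal(0, 15, n),    # Supreme Court precedents cited
    "oral_arg_diff": rng.normal(0, 1, n),      # oral-argument grade
    "elite_attorney": rng.integers(0, 2, n),   # elite advocate for petitioner
})

# Simulate votes so that, as the paper reports, oral argument and precedent
# grounding carry signal while the writing score does not.
logit_p = 0.4 * df["oral_arg_diff"] + 0.02 * df["precedent_diff"]
df["vote_petitioner"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit(
    "vote_petitioner ~ briefcatch_diff + precedent_diff"
    " + oral_arg_diff + elite_attorney",
    data=df,
).fit(disp=0)
print(model.summary())
```

Run on data like these, the writing-score coefficient comes back statistically indistinguishable from zero while the oral-argument and precedent terms do not, mirroring the pattern the authors report; the real study's specification and controls are, of course, more elaborate.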
This is definitely proof of the futility of brief writing rather than the vacuity of BriefCatch grades.
RE: "...whether better briefs lead to better outcomes."
Ask the Underpants Gnomes.
I learned from Defending Your Life that you want an attorney who uses a very large percentage of their brain.
I could say I use 15% but then that would be boasting.
Could the causality be backwards, where winning cases are more likely to have good oral arguments?
I've always felt like a lot of the bad oral arguments came from people arguing loser cases. Either because they knew it was a loser so their heart wasn't in it, or a better lawyer would've known better than to take the case, or the case is some screwball case with a screwball attorney chosen by a screwball client.
Said client being Texas, usually.
You believe a software program’s ratings of Supreme Court briefs? You take its output at face value? Treat it as a gold standard of truth? There’s no evidence the software’s ratings have any more relevance or value than rolling a die. But it’s like, software, and software is way cool shit, so you just accept it?
The fact that the program’s ratings don’t correlate much with outcome could be interpreted as evidence the program’s ratings are worthless. Indeed, one could DEFINE brief quality by influence on the outcome. Influence = high quality. No influence = low quality.
But accepting the program’s ratings as true just because it’s software? I suppose it’s like tarot cards. Tarot cards were way cool shit in the Middle Ages. The latest in miniature illuminated manuscripts. Super advanced technology. So what they say must be right! You wouldn’t want people to think you were, like, not with it, would you?
Hmm. What does this software measure? A quick look at their website indicates they have five writing scores: Reader Engagement, Concise and Readable, Flowing and Cohesive, Crisp and Punchy, and Clear and Direct. It seems to give advice like "start more sentences with short words", avoid passive voice, etc.
Now, readability is nice and all. But these are Supreme Court justices; they can handle writing above an 8th grade level. And I don't think this software can really measure whether an analogy hits home, whether opposing points were addressed, whether unnecessary arguments were avoided, or whether a citation was actually on point. You probably don't want a really horrible score, but maximizing your score probably isn't a goal worth pursuing.
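For what it's worth, crude versions of those surface-level checks are easy to write yourself. Here's a minimal sketch in Python: average sentence length, how many sentences start with a short word, and a naive passive-voice flag. This illustrates the genre only; it is not BriefCatch's actual scoring, which is proprietary.

```python
# Naive, homemade versions of the kinds of checks described above.
# Illustration only -- not BriefCatch's actual scoring method.
import re

# Crude passive-voice flag: a form of "to be" followed by an -ed word.
PASSIVE = re.compile(r"\b(?:is|are|was|were|be|been|being)\s+\w+ed\b", re.I)

def rough_stats(text: str) -> dict:
    # Crude sentence split on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    short_starts = sum(1 for s in sentences if len(s.split()[0]) <= 4)
    return {
        "sentences": len(sentences),
        "avg_words_per_sentence":
            sum(len(s.split()) for s in sentences) / len(sentences),
        "share_starting_with_short_word": short_starts / len(sentences),
        "passive_voice_hits": sum(len(PASSIVE.findall(s)) for s in sentences),
    }

print(rough_stats(
    "The motion was denied by the court. We ask this Court to reverse. "
    "Petitioner's reading is compelled by the statute's plain text."
))
```

Heuristics like these can tell you a brief is turgid; they can't tell you whether a citation is on point.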
Tarot cards also have an impressive-looking list of things they claim to evaluate.
This is good.
Ideally, the outcome of a case should not depend on the quality of legal presentation as much as how the law should apply to the facts. That better-written briefs fail to sway outcomes suggests that judges are more objective than some assume. A beautifully written brief advocating an incorrect legal result should be rejected.
The results regarding oral argument are interesting. But I wonder how quality is assessed here. Oral argument happens after a significant amount of thinking and tentative deciding has already been done by the justices hearing the argument. An advocate who is going to lose is likely to see more skepticism from the judges. As everyone knows, once someone has thought through an issue and arrived at a tentative conclusion, it may be a monumental or impossible task to convince them to turn the ship around. And, to the extent that the law objectively points in one direction, we don’t want an advocate to be able to achieve a different result merely by skill.
I am not sure that the “quality” of oral argument can escape these dynamics. That is, it may not be the “quality” of argument that actually matters, but what we are picking up on is that everyone sees the writing on the wall.
To the extent that quality does matter, that is disappointing, not reassuring, because it suggests that the lawyers are what matter when what should matter more is the law.
That doesn’t mean that quality shouldn’t matter at all. Judges sometimes need help seeing how the logic of the law should resolve a particular fact pattern. But especially when you have multiple judges, it is the logic of the law rather than the skill of advocates that should ideally be the dominant factor.
That doesn’t mean lawyer quality shouldn’t matter. Life is complicated, and sometimes, in close cases, it is hard to see how the logic of the law should drive the case. In these instances, good advocacy will be most important. And if the stakes are high, it may make sense to find and pay for the best lawyer possible. But still, in most instances, even the best lawyer shouldn’t be able to turn a losing case around. Ideally, even an average lawyer should be able to point out the correct law and logic that should resolve the case. And if they don’t, the judge or justices will hopefully make up for any deficiencies. That doesn’t always happen, of course. But that is how the system should lean, and that is my sense of how the system does lean in practice.
"Ideally, the outcome of a case should not depend on the quality of legal presentation as much as how the law should apply to the facts."
"That doesn't mean that quality shouldn't matter at all. Judges sometimes need help seeing how the logic of the law should resolve a particular fact pattern."
Thank you.
As to oral argument, how often does it change the result? My impression, from talking to appellate judges at dinners (I don’t know any personally), is that at most it's 5% of the time.
As to brief writing, allowing a robot to determine what is good or not is bizarre. I’m also amazed that lawyers use a robot at all — can’t they tell any more when they’re making good arguments and presenting them well?
Just from the passing description of BriefCatch, I’m suspicious. It seems to give weight to citations to legal precedent. I thought string cites were supposed to be bad! And even when you discuss the cases, you shouldn't go on and on with a treatise-like exposition of a point of law running through like 25 cases.
The only meaningful measure of the "quality" of a legal brief is whether it's effective or not. Not how "good" the writing is based on one person's or team's subjective opinion of what "good" writing is.
On oral arguments, isn't it a bit of a self-fulfilling prophecy? I found this argument persuasive and well-done, so I'm more likely to vote for it. If I think the arguments are poor enough not to get my support, I'm not that likely to find the argument well done.
The difference, I think, is mostly that the argument measure is the judge's own opinion, while the brief measure is purportedly objective. What you need is a rating system for brief writing and a rating system for argument that can be judged against neutral criteria by someone who doesn't know who ultimately won the case. THEN you look at the result and can have a comparison. But if one measurement is just asking the judges "which did better," then of course that's going to favor the side the judges ultimately voted for.
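To make that concrete, here's a toy version in Python, with all data simulated: blinded raters score both the briefs and the arguments, and you then check which score actually tracks the winner.

```python
# Toy version of the blinded design: raters score briefs and arguments
# without knowing outcomes; we then see which score tracks the winner.
# All data here are simulated.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(1)
n_cases = 80

brief_score = rng.normal(0, 1, n_cases)     # blinded rating, pet. minus resp.
argument_score = rng.normal(0, 1, n_cases)  # blinded rating, pet. minus resp.

# Simulated outcomes in which only argument quality carries real signal.
petitioner_won = (
    rng.random(n_cases) < 1 / (1 + np.exp(-1.2 * argument_score))
).astype(int)

for name, score in [("brief", brief_score), ("argument", argument_score)]:
    r, p = pointbiserialr(petitioner_won, score)
    print(f"{name:>8}: point-biserial r = {r:+.2f}, p = {p:.3f}")
```

With blinded scores on both sides, whatever asymmetry survives is at least not an artifact of asking the judges themselves which side did better.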