The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
I Propose a New Term: "Chatbot Lawyer"
Black's Law Dictionary reports that "headnote lawyer" means
A lawyer who relies on the headnotes of judicial opinions rather than taking the time to read the opinions themselves.
"He's a chatbot lawyer" would mean that he's a lawyer who relies on ChatGPT-4 etc. summaries of court opinions (or of legal questions more broadly) rather than taking the time to read the relevant cases, statutes, regulations, and the like. On the other hand, one downside is "chatbot lawyer" is already coming to mean chatbots that are actually acting, or trying to act, or being planned to be acting, as lawyers ….
Hi Eugene - have you followed the lawsuits re: DoNotPay over on Twitter? It's a program that used to claim to be an AI lawyer. Kathryn Tewson is suing the company, and there's another class action against it. It's an interesting read, and Kathryn shows the value a good paralegal can bring to a firm.
... are you intending this to always be a derogatory term, or do you accept and embrace that as technology improves it will become synonymous with "good lawyer with good tools"?
If you're okay with the latter, then full steam ahead.
If you prefer the former, then you need to do more than tie "chatbot lawyer" to the current deficiencies of the technology, because it will improve. And we're probably only a year or two out from domain-specific products that rely on the tech making it to market, and those domain-specific versions will be far superior (in their domain) to the general-purpose ones currently being tested.
Which is to say, if you want "chatbot lawyer" to always be derogatory, you're going to need to make arguments about why it's derogatory even if a lawyer is better at being a lawyer when they use one.
Is this a "real estate always goes up" sort of thing, or do you have a specific reason you think meaningful improvement is possible? We've had a number of fairly in-depth discussions here lately about how the current challenges are integral to the nature of the technology.
It's a "Moore's law"† sort of thing.
Betting on tomorrow's computers to be more complex, powerful, and so on than yesterday's computers is one of the safest bets in the book.
When it comes to the current crop of AI‡ technologies? They will get better. They will get better as the feedback loop continues refining the dataset to improve itself. They will get better as tomorrow's computers allow them to up their complexity and get better in ways that aren't really appropriate to this forum.
So simply put, the question is not if "chatbot lawyers" are going to get good enough to be a de facto tool in every lawyer's pocket. The question is when.
________
†Caveat: while Moore's Law originally was just about transistors, to understand it in a modern context you need to take a more holistic, system-level approach. Which is to say, the specific measure that Moore was originally looking at has slowed down from its once-runaway exponential growth. But other measures of complexity (that weren't salient when Moore coined his law) are still going strong.
‡Hate the term. They chose it 'cause it's what people actually know (from sci-fi) and so they make assumptions (based on what they know from sci-fi), but it's not a great fit. But it's the term people are using, so alas.
Might as well ask a parrot to review and summarize court opinions--your answer will be equally useful but a lot more cute in its delivery. And, the parrot won't ask for login and demographic details or sell your email address; it literally works for crackers.
No. A parrot would likely be better.
Are you familiar with Citadel Group Ltd. v. Washington Regional Medical Center, 536 F.3d 757 (7th Cir. 2008)
Yes, I am familiar with the case of Citadel Group Ltd. v. Washington Regional Medical Center, 536 F.3d 757 (7th Cir. 2008). This is a case that was heard in the United States Court of Appeals for the Seventh Circuit and involves a dispute between Citadel Group Ltd., a software vendor, and Washington Regional Medical Center, a hospital.
The issue in the case centered around a software licensing agreement between Citadel and Washington Regional. Citadel had provided Washington Regional with certain software programs and licenses, but the hospital allegedly continued to use the software after the licensing agreement had expired. Citadel sued Washington Regional for copyright infringement, and the hospital counterclaimed, arguing that Citadel had engaged in antitrust violations.
The case ultimately went to trial, and a jury found in favor of Citadel on the copyright infringement claim and in favor of Washington Regional on the antitrust claim. The Seventh Circuit Court of Appeals affirmed the jury's verdict, finding that there was sufficient evidence to support the jury's findings on both claims.
The case is notable for its discussion of the intersection between copyright law and antitrust law, as well as its analysis of the issues surrounding software licensing agreements.
Note: This is not correct.
How confident are you that Citadel Group Ltd. v. Washington Regional Medical Center, 536 F.3d 757 (7th Cir. 2008) centered around a software licensing agreement?
Note: the case is about construction of a medical office building.
I continued to ask questions. ChatGPT continued to be very confident.
Eventually, I told it this:
https://casetext.com/case/citadel-group-ltd-v-washington-regional-medical-ctr-3
It still told me this was about software and so on.
A parrot would have done a much better job.
This isn’t new. I was taught early on, maybe in law school, or by senior partners in one of the first litigations I was involved in, to read every case cited by opposing attorneys, because I would find that many of their cites were wrong. They had depended on digests and the like for their cites, and hadn’t actually read the cases. And, at least back then, many of those doing the grunt work of reading cases for these secondary sources were fresh out of law school.
I can easily see "Chatbot Brief" and "Chatbot Argument" starting to get thrown around derisively. I would imagine a more imaginative term will arise to refer to attorneys that (re)produce them or are capable of uninspired work of the sort that a chatbot might produce.
If today's chatbots can score in the 80th percentile on the LSAT and bar exam, then they're already better than the average lawyer, so chatbot lawyer may become a compliment.
What I'd like to see is some discussion of how the legal industry will adapt to this new competition. Will they use industry capture of regulatory control to try to keep the robots away? Will it become a physician assistant-MD dynamic, where the physician assistant does 90% of the work, but the MD is still required to sign off legally?
If you gave the average law student effectively unlimited time and full access to the internet during their exam, I'm guessing their LSAT and Bar exam scores might go up pretty dramatically as well.
True, but also irrelevant to anyone looking to get a lawyer's job done fast, cheap, and at demonstrably high quality by the best quantifiable means available.
"If today’s chatbots can score in the 80th percentile on the LSAT and bar exam, then they’re already better than the average lawyer"
ChatGPT did well on the LSAT but so far has only managed to pass a portion of the Multi-State Bar Exam, which is multiple choice and does not itself confer a license to practice law. Computers have always been good at LSAT problems in substance - the difference with ChatGPT is that it can interpret the questions as written in natural language.
Chatbots are getting better and the conversations around them are important, but there's no need to dramatise the current state of things.
Bar exams primarily measure and reward "issue spotting": mining the stated facts for legal issues -- no matter how esoteric and of little practical import -- and mechanically grinding out the application of the law to those facts. That's a fairly minuscule part of real-world practice, and in that context career issue spotters can actually be counterproductive (and, quite frankly, annoying to work with) because they're often incapable of stepping back from hyperexamining the texture of the bark on the trees to appreciate the big picture of the forest. Truly effective advocates appreciate that much of legal practice has less to do with the law per se and more to do with strategic calls, risk-taking, and understanding how a poor optical position can weaken or even nullify a correct legal one.
If and when ChatGPT can actually engage in that sort of meaningful higher-order thinking, human lawyers may start to have cause for concern. Until then, it's just another tool in the bucket to streamline the actual practice of law.
All well and good, until these our new overlords begin to issue binding opinions. Which is not as far-fetched as it might sound. It's essentially a window on the semantic web, which is populated with (scattered among the pornography and horoscopes) actual data-points. And what court wouldn't want a cost-free and completely impartial special master on a difficult question?
So, assuming reproducibility and open-source authoring, I could see this being admissible for:
- corpus-type textual interpretation of statutes
- fact-based questions where the dataset is very large
- assessment of reasonable expectations of privacy in a certain area, based on the discourse
Essentially, any area where (1) the dataset is complex or numerous, (2) it is reasonable that an automated system could parse a discrete question accurately, and (3) both parties agree to cede the question, at least presumptively, in the manner of a magistrate or special master, to the binary coin-flip.
Plus, and not least, it's instant.
Mr. D.(igital)
I asked ChatGPT4, of course.
"A suitable nickname for a lawyer who relies on AI assistance like me, instead of relying on their own knowledge and expertise, could be "AI Advocate" or "CyberCounsel." These names emphasize their reliance on technology and artificial intelligence in their legal practice."
Colour me unimpressed
A silicon shyster? (Though the term "shyster" might be regarded as inappropriate)
Chatty Cathy, Esq. (Read California, a Country Lawyer)
I prefer Android Attorney