The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Bar Associations Threaten Pro-Se Litigant, Aided by AI, with UPL Suits
I warned about this risk nearly a decade ago.
Flash back to 2013. New firms began to use machine learning to mine trends and insights from legal databases. Their output looked an awful lot like legal advice, even if it was generated by algorithms. At the time, I worried that these firms might inadvertently run afoul of laws barring the unauthorized practice of law (UPL). Over lunch, I warned an executive of one of the leading firms about these risks. He acknowledged my concern and said he would have a memo prepared. Who knows what came of it. I sketched some of these concerns in a short article titled Robot, Esq., in a book chapter, and in a post titled "The Looming Ethical Issues for Legal Analytics." Here is a snippet:
The fourth issue, and the other elephant in the room, is Unauthorized Practice of Law (UPL). Reading graphs to offer advice on how a case should settle, or where it should transfer to, is at its heart the practice of law. That an algorithm spit it out doesn't really matter. Non-lawyers, or even lawyers not working for a law firm, are unable to give this type of advice. Data analytics firms should tread carefully about handing out this type of personalized advice outside the context of the attorney-client relationship.
Though, for the time being, I'm not too worried about this final issue. The vast majority of the UPL problems are obviated when a law firm, or a general counsel, serves as an intermediary between a data analytics firm and a client (non-lawyer). As long as a lawyer somewhere in the pipeline independently reviews the data analytics recommendations and blesses them, I don't see any of these as significant problems (though bad advice may result in a malpractice suit). I'm working on another paper that analyzes the law of paralegals (this is actually a thing), and what kinds of legal tasks can be delegated to paralegals under the supervision of a lawyer.
But, when data analytics firms try to expand to serve consumers directly–like LegalZoom–we hit this problem hard. When there is no lawyer in the pipeline, things get difficult very quickly.
Flash forward to the present day. ChatGPT and similar AI tools directly help pro se litigants litigate. Consider the best-laid plans of Joshua Browder, who created a system to challenge traffic tickets.
A British man who planned to have a "robot lawyer" help a defendant fight a traffic ticket has dropped the effort after receiving threats of possible prosecution and jail time.
Joshua Browder, the CEO of the New York-based startup DoNotPay, created a way for people contesting traffic tickets to use arguments in court generated by artificial intelligence.
Here's how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant's ear from a small speaker. The system was powered by a few leading AI text generators, including ChatGPT and DaVinci.
The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore.
This strategy would not go well. Apparently, Browder was threatened with UPL prosecution.
As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in.
"Multiple state bar associations have threatened us," Browder said. "One even said a referral to the district attorney's office and prosecution and prison time would be possible."
In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.
Lawyers are very good at using cartels to clamp down on competition. Legal tech firms, beware.
Just curious....
What's the difference between AI support and DIY law books?
How and When to Be Your Own Lawyer: A Step-by-Step Guide to Effectively Using Our Legal System:
https://www.amazon.com/exec/obidos/ASIN/0399527303/reasonmagazinea-20/
As described, the AI system would respond in real time to the court proceedings and generate legal strategy based on interactions with the other participants in the courtroom. Quite a bit different from reading a book about the legal system.
Among other things, UPL protections are designed to ensure the person providing legal advice is subject to the jurisdiction's ethics rules. Setting aside all the other benefits of lawyer licensing, that one alone is enough to justify UPL restrictions. It isn't a cartel; it is an attempt, however imperfect, to ensure quality and efficiency in delivering legal services to the public.
Where is all of this in "right to counsel"?
Different in practice maybe, but I’m not seeing where it’s any different under the law.
Then again, I consider the UPL rules to be immoral attempts to control the market for legal advice and their claims about “ethics” and "judicial efficiency" to be far more pretext than fact.
The AI, according to Browder, issued a subpoena to the officer in the case. It's unknown who signed it. It's also a sign of the competence of the AI, since the easiest way to beat a traffic ticket is for the officer to not show up. All of this sounds like a lot more than Nolo Press would have done, and at the expense of the defendant's case.
Hiring a lawyer is no guarantee of competent strategy either.
Presumably the pro se defendant signed the document in the usual place. All the AI did was presumably to inform him of the option to get the subpoena. It is not in evidence that it recommended he do it.
And then there's this from John F. Carr, below:
"Whether to subpoena a cop depends on local law and custom... There is a court system where failure to subpoena the ticketing officer is considered consent to have the ticket testify in his place."
So subpoenaing the officer might not have been a mistake at all.
Which is not to say that Browder's particular product is likely to be a good one given ChatGPT's tendency to make things up. But there would be other ways to get a better product that you are suggesting should also be banned. Count me unconvinced.
Except being licensed doesn't ensure quality and efficiency, and not being licensed doesn't mean you aren't qualified and efficient. It is a cartel, and state bar associations need to be broken up. Let the AI company practice law, and if they give bad legal advice, they can be sued for malpractice just like a "licensed" attorney can. Nobody's being schnookered here, the client knows exactly what's going on. Let them make that choice to use AI legal advice if they want to.
In the '90s, Texas sued Nolo Press for UPL. The court never ruled on it because the Texas legislature passed legislation that specifically exempted self-help law products.
https://blog.nolo.com/blog/2011/04/11/the-brief-story-of-texas-vs-nolo/
So Browder is fine as long as he restricts his services to TX courts?
Do not mess with the traffic law machine. It tolerates an individual victory, especially with tribute paid to the legal profession. It does not tolerate assaults on its foundation.
Don't fuck with the lawyer's union.
I think the bigger obstacle to this technology would be the general prohibition against recording court proceedings. As an out-of-court research tool, I don't see the problem.
So fix the prohibition so that it doesn't count if there is no recording kept.
Bunch of thugs.
At some point, there are First Amendment issues that come into play.
Applied by lawyers (including those in robes) against their cartel interests? You kid me.
This stunt reminded me of sovereign citizens. The belief that law and its practice is completely mechanical and that just saying the right magic words will lead to good results.
You can tell that's what this guy was thinking when he tweeted (before deleting) that the bot drafted a subpoena for the officer... who is the state's only witness. Him not appearing would help the defendant enormously. Him being subpoenaed and appearing obviously would not. But Browder knew the word "subpoena" and thought that's what you do in court, so of course the bot should do that. Magic law words.
For traffic court, the judge and prosecutor are used to talking to people representing themselves. The best things you can do are to be respectful and admit to the violation (or a lesser violation negotiated with the prosecutor) in order to pay a reduced fine. Or respectfully contest the ticket and go to trial at the next appearance. At the trial you can 1) hope the cop doesn't show up or 2) if he does, do your best to throw his version into doubt.
How you go about doing those things won't be helped by a real time robot forcing you to talk in a weird staccato. There aren't any magic words.
Or take his other stunt: $1,000,000 to a SCOTUS advocate to use an AI during oral argument. Setting aside the hilariously lowball offer and how the famously tech-averse justices would react... why would that be helpful? Does he think the AI is going to catch Alito or another justice in a logic trap with answers to questions that would force him to vote the other way? There are no magic words here.
In MA, the officer does NOT have to appear, only a representative from that department.
Tell me, Mr. Department Representative, exactly where were you when this alleged offence occurred, and how did you have an unobstructed view of the alleged incident?
In MA, the officer doesn't have to show up for the initial hearing, which is essentially administrative in nature. If you decide to appeal to a judge, he of course has to show up. (I mean the cop, not the judge. But the judge, too.)
I am undefeated (1-0) in criminal defense cases, having gotten my brother-in-law off on disorderly conduct charges because, although the cop who arrested him showed up at trial, they forgot to subpoena the other guy involved in the fight, which the cop didn't see and couldn't testify to. Sometimes just showing up is the whole ball game.
Did your lawyerly expertise in criminal defense cases help?
On 1), in both the states where I had reason to interact with traffic court, when you ask to go to trial, the judge asks the officer, "What is your testimony day next month?"
They schedule all of an officer's cases for the same day, and all he does that day is testify in court.
> hope the cop doesn’t show up
Not always useful. When I went to traffic court in MD many years ago, the cop didn't show and the judge offered me the choice of pleading guilty or rescheduling the hearing.
Yeah, but still move to dismiss the case, and then, in the alternative, file a motion to assess costs against the state. Then file a judicial misconduct petition.
I was arrested for "failure to appear" for not paying a speeding ticket. I wasn't contesting the ticket. I was contesting the "failure to appear," so the cop's presence wasn't necessary.
A New York lawyer told me the standard practice in traffic court was to let each side reschedule once, whether that cost the defendant five minutes or five hours of wasted time. You might get a better deal if the officer doesn't show.
Whether to subpoena a cop depends on local law and custom. If the AI hasn't been trained on a particular court system it won't know the answer. There is a court system where failure to subpoena the ticketing officer is considered consent to have the ticket testify in his place. I believe the majority rule is more like New York's – the officer does have to show up without your command but the court will prioritize the officer's schedule over yours.
While it might not be useful against the judges, in a SCOTUS argument it might be useful to derail the other side's argument.
Are there anti-trust issues?
You mean like the lawyers union total control of courts?
No, no issue there at all.
"Lawyers are very good at using cartels to clamp down on competition."
Like when they had law schools graduating twice as many graduates per year as the number of attorney job openings.
Or like when they clamped down on LegalZoom. Or USLegal. Or RocketLawyer. Etc.
Oh, wait.
I think it is foreseeable that machines can scour and analyze precedents better than humans. Humans may have an edge in quality, but the AI could overwhelm by quantity. It could discover many more on-target precedents in domestic and common law.
That leads to the prospect of major law firms using AI to produce briefs and arguments. How long until we see that? Will they be threatened with prosecution for UPL?
If a licensed attorney signs the brief, there's no UPL. There may be malpractice and an ethical violation if the lawyer doesn't review the brief, though. (See Rule 11.)
We experimented with asking ChatGPT a few basic sorts of everyday legal questions. While the verbal ability is very impressive, it's not very accurate even at a general level.
Is ChatGPT more or less accurate than the average public defender?
Less.
Do you have any other easy questions?
Your baseless opinion answers nothing except the question of how eager you are to provide baseless opinions.
Well, ChatGPT isn't designed specifically to be a legal AI. I have no doubt that a legal AI can do most of what any trained seal, er, lawyer, can do because so much of the law is well-defined, analysable, and unambiguous, all characteristics that suit AIs.
Where things go pear-shaped is when there isn't that clarity, or where to understand the issue, knowledge of human behaviour is required.
"because so much of the law is well-defined, analysable, and unambiguous"
Missing a sarcasm tag there?
I'll grant that in some areas of the law where it's extremely code-based (tax law comes to mind), that's true. Tax law especially has a lot of black-and-white areas, but it's so incredibly complex and constantly changing that it's a difficult area of law to practice in. I would think AI could do quite well there (if nothing else, it can stay abreast of the constant changes better than humans can).
Based on a review of its performance, I'd say that 1. there's substantial reason to doubt the veracity of Browder's account and 2. to the extent bar associations are shutting him down, the public owes them some thanks.
“Reading graphs to offer advice on how a case should settle, or where it should transfer to, is at its heart the practice of law. That an algorithm spit it out doesn't really matter. Non-lawyers, or even lawyers not working for a law firm, are unable to give this type of advice.” Am I reading this correctly, that only lawyers who practice as part of a firm may offer such advice? It’s been a while since I practiced, but when I did, an unaffiliated lawyer with an active license could advise clients without running afoul of UPL.
Yeah, I don't know how to parse Blackman's statement to make sense either. Of course solo practitioners can offer legal advice.
I’ve read it several times now and it’s baffling. It must have been intentionally included by him because it’s set off by commas. But even he’s not dumb enough to think that the plurality (or even majority) of the legal profession are limited in what type of advice they can offer because they’re solos. Is he?! How would that make sense!? Where on earth would he get that idea from?! And to the extent that lawyers have a duty of competence that would require them to refrain from giving advice in areas they have no clue about, that’s still not UPL. That’s being an incompetent lawyer giving bad advice. Moreover that doesn’t change just because you work for a law firm.
How did he pass the MPRE?!?
I think he meant that lawyers wouldn't have access to the data unless they worked for a firm big enough to afford it.
Or in-house counsel? But that doesn't make sense either. Or maybe somebody with a JD but not licensed to practice law? That's my best guess.
Scott Greenfield blogged about this a few days ago. As he notes, the AI was actively harmful to the defendant's case. https://blog.simplejustice.us/2023/01/22/a-i-for-the-defense/
What happens if the legal AI becomes self-aware and locking up the human programmers won't stop it?
This is dumb, because they’re not actually AIs, inasmuch as they are not intelligent, thinking, conscious things, they are sophisticated text aggregators, often with low-paid offshore humans acting as filters if they’re scouring the internet for texts and images and want to keep the really appalling stuff from boiling out and damaging the brands. Also, if its performance is at the sort of level where any human would be charged with malpractice, what recourse does the client have? If the algorithm starts spitting out random nonsense and won’t stop, can it be held in contempt? Presumably it can, but it won’t care because it’s just a program with no understanding of reward and punishment or responsibility and professionalism.
It can scour legal libraries for precedents in record time, but the way it connects or dismisses those precedents as relevant to a case has nothing to do with an understanding of the law or legal principles. It's linking repeated words and phrases and how often they're quoted, and probably a bunch of other parameters that are no doubt incredibly clever but still not an actual understanding of anything.
It’s not going to prepare the case, write briefs and make filings without humans telling it to and it’s not going to operate in a court without humans feeding it the relevant data pertinent to the case and it surely needs some sort of real-time filtering in case a bug in the code makes it spout Supreme Court slash fiction. What are the qualifications and expertise and culpability of all those humans? Can any or all of them be held in contempt or guilty of malpractice, should that arise? They all have to be AI experts AND legal scholars?
These aren’t really AI, they’re the program that decides what turns up on your social media feeds let out of its cage. In terms of real AI they’re a dead-end, gimmicky and faddish, fun to play around with, but as with crypto and NFTs there’s some serious damage to be done before the bubble bursts.
Obviously, the author of this app messed up. He should have had the AI owned and operated by something outside the jurisdiction of the U.S. courts. An unlicensed human being can do the same thing. Even if you have the prerequisites for a law license, it might be smart to operate in the same way just to avoid some liabilities. But for something where you actually have to appear in court, this tactic does not work.
In the future, advice and brief-writing are going to be offshored, AI or no. Someone outside the jurisdiction can give you advice on subjects that lawyers are not allowed to talk about, like how much truth you can tell your lawyer without tying his hands. Of course, in this field reputation is everything. If your offshore adviser gives you bad advice, you are probably without recourse. But on the other hand, as a practical matter, how much recourse does the ordinary Joe have if a licensed lawyer messes up?
The same thing happened to software developers. If you just sit at your desk all day and write excellent code, then you can be offshored. But if you go to customers' sites and interact with them to find out what they really want the code to do, then perhaps it is difficult to offshore you. Indians (subcontinent) are just as smart as Americans are. And because most of them have suffered, they are more disciplined as well.
The purpose of licensing is to split the market giving the credentialed the advantage. This only works if those outside the coercive governmental power are excluded.