The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"Proposed Advisory Opinion on Lawyers' and Law Firms' Use of Generative Artificial Intelligence"
From last week's posting by the Florida Bar:
The Board Review Committee on Professional Ethics will consider adopting a proposed advisory opinion at the direction of The Florida Bar Board of Governors based on an inquiry by the Special Committee on Artificial Intelligence (AI) Tools and Resources, at a meeting to be held on Thursday, November 30, 2023, from 1-3 p.m. at the Henderson Beach Resort in Destin, Florida.
There are no drafts for the committee's consideration at this time. The proposed advisory opinion will address:
1) Whether a lawyer is required to obtain a client's informed consent to use generative AI in the client's representation;
2) Whether a lawyer is required to supervise generative AI and other similar large language model-based technology pursuant to the standard applicable to non-lawyer assistants;
3) The ethical limitations and conditions that apply to a lawyer's fees and costs when a lawyer uses generative AI or other similar large language model-based technology in providing legal services, including whether a lawyer must revise their fees to reflect an increase in efficiency due to the use of AI technology and whether a lawyer may charge clients for the time spent learning to use AI technology more effectively;
4) May a law firm advertise that its private and/or in-house generative AI technology is objectively superior or unique when compared to those used by other lawyers or providers; and
5) May a lawyer instruct or encourage clients to create and rely upon due diligence reports generated solely by AI technology?
Pursuant to Procedures 6(d) and (e) of The Florida Bar Procedures for Ruling on Questions of Ethics, comments from Florida Bar members are solicited on the issues presented. Comments must contain Proposed Advisory Opinion number 24-1, must clearly state the issues for the committee to consider, may offer suggestions for additional fee arrangements to be addressed by the proposed advisory opinion, and may include a proposed conclusion.
Comments should be submitted to Jonathan Grabb, Ethics Counsel, The Florida Bar, 651 E. Jefferson Street, Tallahassee 32399-2300, and must be postmarked no later than December 1, 2023.
Well, the stuff AI spits out is biased and often just plain wrong, so no lawyer should use it except as a glorified search engine, and then double-check everything it produces.
It’s hard to believe this is even a question. As a lawyer I’m embarrassed.
It’s like a kid asking his teacher, “Under what circumstances is cheating wrong?”
I would say no to point 1. It shouldn't matter much whether it's used. Obviously you can't lie to the client about it if they ask, though.
For 2, I'd supervise it to the same standard as "information I got from the Reason comments section".
For 3, I'm not quite sure what it's asking. If you're spending 2 hours on a motion instead of 4 because a program is doing half the work for you, you can't bill the client for 4 hours just because you somehow feel like 4 hours' worth of work was done. That's not how it works when you bill by the hour.
But if you want to raise your fees from $100 per hour to $200 per hour because you're getting twice as much done, I guess you can, if your clients are willing to pay. I certainly wouldn't say you "must" do so though.
And would you bill for time spent learning how to use a law library more efficiently, or how to search PACER? If no, then you can't bill for time learning how to use AI more efficiently.
For 4... Sure, why not. But you'd better be able to back it up.
For 5, right now you can't trust them enough to use them as more than a supplement. AI will flat out lie to you about the contents of stuff.
Treat AI as a moronic associate whose work you have to check for yourself before submitting it to a client or a court.
1. Probably no.
2. Obviously yes.
3. No and obviously no.
4. Maybe but be prepared to be sued for false advertising.
5. No (for the same reason that 2 is obviously yes).
That was easy.
Ah, but how would ChatGPT 4 answer those questions?
This is like asking whether lawyers should be allowed to use Wikipedia or Google. Silly questions.
These proposals, and others like them, all suffer from the lack of a definition of AI.
Spell check can be considered AI.
Grammar check can be considered AI.
Logic check (you used the word attached, but no attachment) can be considered AI.
Suggestions for how to improve your writing can be considered AI.
Suggestions for what to write can be considered AI.
Services such as, "I am a pro se litigant. Rewrite my brief to make the language sound more like what the judges expect." can be considered AI.
Services such as "summarize opinion ABC" for me can be considered AI.
Services such as "do my arguments cover all the necessary prongs?" can be considered AI.
It is a continuum. It will be necessary to draw an arbitrary line. Below the line is not AI, above is AI. And the line will have to be defined technically.
"Services such as, “I am a pro se litigant. Rewrite my brief to make the language sound more like what the judges expect.” can be considered AI."
These are rules for regulating attorneys; these would not apply to pro se litigants.
Which does not change the fact that, as Archibald notes, "AI" is inadequately defined.
These are not proposed rules; these are areas that the advisory committee is looking into.
It is rather insane to complain about a lack of definitions when there isn't even a rule yet.
I disagree. It is insane to make rules about something without knowing that the something can be defined.
I think when they get into it, they will find the definition impossible.
For the sake of argument, imagine that every word processor has an AI slider control. Set on step 1, it just means spell check. Set on step 10, it means the maximum help the AI can produce. Somewhere between 1 and 10 there must be a line. Below the line, it is not AI; above it, it is AI. Then consider that every competing word processor has its own slider with different definitions.
I compare it to a rule sanctioning lawyers for being "overly clever, or greedy." If those things could be defined, there would be numerous rules about them everywhere.
General thoughts on these proposals-
1. Yes. Obviously, it depends on the definition of "generative AI." That said, this will just end up being buried in retainer agreements.
2. Yes.
3. Unclear based on the question. A lawyer should not bill the client for time spent learning how to use AI. A lawyer should bill the client for the time spent working; if AI makes the attorney more efficient, the time spent on the case will go down. The attorney can choose to raise their hourly rate if they want to. Or take more cases.
4. I hate lawyer advertising, but this is going into possible First Amendment issues. I generally think that this would be a bad idea, but I'm unsure about what regulations would work for this.
5. No.
A question I am curious about...
What about a pro se defendant, under the following two cases:
1. If such a defendant requested access to say ChatGPT to prepare their case?
2. If such a defendant requested access to ChatGPT be made available during the course of the actual trial?