Ban on AI-Generated "Biased, Offensive, or Harmful Content" in Law Practice Passes California Senate, 39-0
The proposal would add a new Business and Professions Code section that would say, in relevant part (emphasis added):
It is the duty of an attorney using generative artificial intelligence to practice law to ensure … [that r]easonable steps are taken to do … [r]emove any biased, offensive, or harmful content in any generative artificial intelligence material used, including any material prepared on their behalf by others.
But legitimate advocacy, whether in court or "provided to the public," may well include content that some view as "biased, offensive, or harmful" (e.g., emotionally distressing, advocating for bad ideas or bad people, etc.). An attorney may well reasonably think that it's in his client's interest to engage in such advocacy.
As I understand it, there are no legal ethics rules forbidding such advocacy—indeed, they may mandate it, if that's what it takes to serve the client's interest. Indeed, even the proposed Rule 8.4(g), which would have forbidden certain "derogatory or demeaning" speech "based upon race, sex, religion, …," and which some courts have rejected on First Amendment grounds, at least expressly excluded "advice or advocacy consistent with [the] Rules [of Professional Conduct]." This proposed statute contains no such exclusion (though even if it did, I think it would still be improper).
I'm not sure how the law can then forbid the lawyer from using AI to express those views. Indeed, I think such a requirement would be an unconstitutional viewpoint-based speech restriction, especially since "practic[ing] law" often involves not just creating court filings but also creating public statements on a client's behalf. And even when it comes to court filings, where various restrictions (perhaps including some viewpoint-based ones) may be permissible, it strikes me that this restriction would be highly unwise.
Likewise, under the bill a lawyer would have the duty to ensure that
The use of generative artificial intelligence does not unlawfully discriminate against or disparately impact individuals or communities based on age, ancestry, color, ethnicity, gender, gender expression, gender identity, genetic information, marital status, medical condition, military or veteran status, national origin, physical or mental disability, political affiliation, race, religion, sex, sexual orientation, socioeconomic status, and any other classification protected by federal or state law.
But what does it mean for generative AI in an attorney's work product to "unlawfully discriminate against or disparately impact individuals or communities" based on those criteria? For instance, say the attorney uses AI to generate an argument that sharply condemns people who have a particular affiliation—is that forbidden, because it "disparately impact[s]" that "communit[y]"? Or is that OK because it's not an "unlawful[]" disparate impact? If so, what exactly would be an unlawful disparate impact of the use of generative AI (as opposed to, say, a hiring decision by the lawyer's client)?
Similar rules have already been implemented as part of California State Judicial Administration Standard 10.80, but that standard applies to judicial officers "within their adjudicative role." Such restrictions on the state's own judges are quite a different matter from ones that bind all lawyers "practic[ing] law."
And what about offensive material the attorney generates without AI? That, apparently, is OK.
I don't even understand what supposed evil is the target of this law.
The targets of this law are white, heterosexual males.
I was asking someone who wasn't insane and stupid. ("Why didn't his passport get stamped when he traveled within the United States?")
Where the hell is the ACLU?
Given their shift in the last 15-20 years, they are probably 100% in favor of this law. You should ask, "Where the hell is FIRE?"
New York. Where do you want the ACLU to be?
Apparently it's the potential evil of an AI generating mean text that an attorney agrees is effective advocacy. (I assume California is like most other places, and requires an attorney to stand behind their work product -- so if the generated mean text isn't effective advocacy, the attorney should remove or revise it anyway.)
That's my question, too. When elected officials pass a law, we want it to address some problem (and, ideally, one that it is proper for government to address).
What's the issue here? Not paying enough attention to AI?
"I'm not sure how the law can then forbid the lawyer from using AI to express those views."
A distinction can be made between stuff the AI says on its own, the product of a computer with no free speech rights, and stuff the human user intentionally causes to be created. Perhaps the law fails to make this distinction. Outside of legal briefs, we may need to make this distinction to give humans a chance to be heard over the noise.
Can you flesh out how you see that working?
1) I ask AI 'what is the best form of government' and it answers 'democracy is the best form of government.' I agree and say 'democracy is the best form of government.'
2) I ask you the same question and you answer the same; I agree and say 'democracy is the best form of government.'
3) I ponder the question myself/read a book/whatever and say 'democracy is the best form of government.'
In all three cases, the eventual speech is mine. How do you distinguish case 1?
I was under the impression lawyers were on the hook for stuff they submit, like a doctor or pharmacist. You know, the professional in professional.
In that sense, all submissions, whether AI, lawyer, paralegal, or TV show, are undergirded by a recognized authority signing it, and held to account.
Which reminds me: to help cut the cost of legal advice, there should be fast-track paths to import lawyers, just as there are for doctors, nurses, and software engineers.
I mean, if jobs where fuckups can kill (doctors, nurses, software engineers) are fine to fast-track, how much more so for fluff jobs like lawyering.
You'd think they would start with ensuring that attorneys who use AI ensure that all cited cases actually exist.
A better start would be to forbid billing for any "work" done by A.I.
California Legislature Pushes Unconstitutional Bill.
In other News At 11, Water is Wet, Puppies are Cute and Britain Mistakes Orwell's 1984 For An Instruction Manual.
I have been around since luddites were pissing and moaning about PGP. That horse has been out of the barn for quite some time.
Such a law would violate the Ohio Constitution, which grants complete authority over the practice of law to the Ohio Supreme Court.