Ban on AI-Generated "Biased, Offensive, or Harmful Content" in Law Practice Passes California Senate, 39-0
The proposal would add a new Business and Professions Code section that would say, in relevant part (emphasis added):
It is the duty of an attorney using generative artificial intelligence to practice law to ensure … [that r]easonable steps are taken to do … [r]emove any biased, offensive, or harmful content in any generative artificial intelligence material used, including any material prepared on their behalf by others.
But legitimate advocacy, whether in court or "provided to the public," may well include content that some view as "biased, offensive, or harmful" (e.g., emotionally distressing, or advocating for bad ideas or bad people). An attorney may reasonably conclude that such advocacy is in his client's interest.
As I understand it, there are no legal ethics rules forbidding such advocacy—indeed, they may mandate it, if that's what it takes to serve the client's interest. Even the proposed Rule 8.4(g), which would have forbidden certain "derogatory or demeaning" speech "based upon race, sex, religion, …," and which some courts have rejected on First Amendment grounds, at least expressly excluded "advice or advocacy consistent with [the] Rules [of Professional Conduct]." This proposed statute contains no such exclusion (though even if it did, I think it would still be improper).
I'm not sure how the law can then bar the lawyer from using AI to express those views. Indeed, I think such a requirement would be an unconstitutional viewpoint-based speech restriction, especially since "practic[ing] law" often involves not just creating court filings but also creating public statements on a client's behalf. And even as to court filings, where various restrictions (perhaps including some viewpoint-based ones) may be permissible, this particular restriction strikes me as highly unwise.
Likewise, under the bill a lawyer would have the duty to ensure that
The use of generative artificial intelligence does not unlawfully discriminate against or disparately impact individuals or communities based on age, ancestry, color, ethnicity, gender, gender expression, gender identity, genetic information, marital status, medical condition, military or veteran status, national origin, physical or mental disability, political affiliation, race, religion, sex, sexual orientation, socioeconomic status, and any other classification protected by federal or state law.
But what does it mean for generative AI in an attorney's work product to "unlawfully discriminate against or disparately impact individuals or communities" based on those criteria? For instance, say the attorney uses AI to generate an argument that sharply condemns people who share a particular political affiliation—is that forbidden, because it "disparately impact[s]" that "communit[y]"? Or is that OK because it's not an "unlawful[]" disparate impact? If so, what exactly would be an unlawful disparate impact of the use of generative AI (as opposed to, say, a hiring decision by the lawyer's client)?
Similar rules have already been implemented as part of California State Judicial Administration Standard 10.80, but that standard governs judicial officers "within their adjudicative role." Restrictions on the state's own judges are quite a different matter from restrictions that bind all lawyers "practic[ing] law."