The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

First Reference to ChatGPT in a Judicial Opinion?


From Hernandez v. San Bernardino County, decided Jan. 26 by Judge Jesus Bernal (C.D. Cal.):

The FAC [First Amended Complaint] veers between conclusory allegations that merely parrot the legal standard and specific examples of training that do not apply to the instant case, likely because they were copied and pasted from other civil rights cases brought by Plaintiff's Counsel. Plaintiff alleges that the County "knew that a second racial riot was imminent as a result of not moving the racially diverse inmates after the first riot. Given the known limitations of the County jail, it was obvious that County jail detention staff, including the individual defendants[,] would need special training in order to seriously address threats of violence among detainees, and ensure that inmates were not housed with other racially diverse detainees following the first racial riot." (FAC ¶ 60.) Having alleged that the failure to train was thus "obvious," Plaintiff alleges that the County "had either actual or constructive knowledge" of the problems alleged, "condoned, tolerated and through actions and inactions thereby ratified such policies," which means that "Defendant also acted with deliberate indifference to the foreseeable effects and consequences of these policies with respect to the Constitutional rights of Plaintiff, and other similarly situated." (Id. ¶ 63.) Plaintiff proceeds to allege 15 areas of deficient training, most of them seemingly unrelated to the case at hand: "[f]ailing to adequately investigate the background, training and experience of correctional deputies and their propensity to support and facilitate violence," "[f]ailing to control the conduct of its deputies who have a known propensity of supporting and facilitating violence," and "[s]anctioning, condoning, and approving a correctional deputy-wide custom and practice of a code of silence, cover-up and dishonesty," to cite just a few. (Id. ¶ 62.)

The problem with these allegations is not that there are too few of them, or even that they lack detail. The problem is that they read like what an artificial intelligence tool [footnote: See, e.g., OpenAI, ChatGPT, https://chat.openai.com] might come up with if prompted to allege training violations in a jail according to Twombly-Iqbal pleading standards; in other words, a result that appears facially sufficient provided one does not read very carefully.