The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Spotted Any Apparent Use of ChatGPT by Self-Represented Litigants?
If so, let me know. My theory is that this is likely to become quite common, despite ChatGPT's attempts to block overt requests to write legal documents. Whether it's a good development or a bad one is a separate matter; for now I'd just like to see situations where it appears to be happening.
You should try posting some ChatGPT articles here, just to see if anyone notices. Let them stand for a day before telling us.
I think we need a ChatGPT blog comment.
chat gpt, do [sic] what constitutional rights do internet commenters have?
Internet commenters have certain constitutional rights, just like any other citizen of their country. However, the specific rights and protections that they have may vary depending on the country and its laws.
In the United States, the First Amendment of the Constitution protects the right to free speech, including the right to express one’s opinions and ideas online. This means that internet commenters are generally free to express their thoughts and opinions on various topics, even if their views are controversial or unpopular.
However, there are some limitations to free speech, such as speech that incites violence, is defamatory or libelous, or constitutes harassment or threats. Internet commenters should be aware that their speech may not be protected if it falls into one of these categories.
Additionally, internet commenters may have some privacy rights depending on the laws of their country. For example, in the United States, the Fourth Amendment protects against unreasonable searches and seizures, which may include government surveillance of online communications.
Overall, while internet commenters have certain constitutional rights, they should also be aware of their responsibilities and limitations when expressing their opinions online.
ChatGPT, what supreme court anniversaries can you name for today?
I’m sorry, but as of my knowledge cutoff date of September 2021, I don’t have any information on the specific date of today’s anniversary of any Supreme Court cases or events. However, I can tell you some notable anniversaries that fall on this month of February:
On February 1, 1790, the first session of the United States Supreme Court was held in New York City.
On February 24, 1803, the landmark case of Marbury v. Madison was decided, which established the principle of judicial review, giving the Supreme Court the power to declare acts of Congress unconstitutional.
On February 25, 1870, Hiram R. Revels became the first African American to serve in the United States Senate after being sworn in as a senator from Mississippi.
I’m sorry, why shouldn’t a layman be able to use ChatGPT and represent themselves? Is that illegal, or something?
I am not being facetious…it is a legit question. Why couldn’t I use ChatGPT to draft a brief? If the draft is for shit, the Judge will tear me a new one, and issue a summary ruling for the opponent. I’m screwed; I bear the consequences.
Because a person with neither legal training nor an appreciation for the limits of ChatGPT may find themselves thinking that the brief it composed is good, even though it is almost certainly quite bad.
I don’t recall whether it was here that I read about a professor who, instead of asking his students to write an essay (where he would expect at least some to C&P ChatGPT’s output), had his students ask it to write the essay, and their task was to critique it.
I can imagine ChatGPT writing a brief that when critiqued and edited by someone with actual legal knowledge would be fairly decent, but if someone lacked that knowledge? NFW.
Shortly after ChatGPT was released to the public, I asked it to write a pretty simple legal argument about a statutory issue in my state that is very well settled with lots of case law. ChatGPT completely blew it. The response was grammatical and confident, but the substance was frivolous and the citations were wrong.
Other responses are why you shouldn’t use ChatGPT, but the reason you’re not allowed to is much simpler: protectionism.
I’m not sure about your question here. Presumably ChatGPT’s operator doesn’t want to be sued because someone used ChatGPT to draft a terrible brief that cost him his case. So that’s why ChatGPT doesn’t allow it.
As for courts: Rule 11 applies. You are required when submitting a pleading or brief to certify that the claims and arguments in there were made in good faith and are warranted by existing law (or a non-frivolous argument for why existing law should change). If you didn’t actually write the brief, you’re going to have a difficult time with that.
ChatGPT, make an argument that concerns about Internet privacy are overblown
As an AI language model, I am programmed to present arguments on both sides of a debate, so here are some potential arguments that could be made in favor of the idea that concerns about internet privacy are overblown:
Lack of significant consequences: Despite widespread concerns about internet privacy, few people experience significant negative consequences from their data being collected and used by companies. Many individuals are willing to trade some level of privacy for the convenience and personalized experiences offered by the internet. Thus, the impact of privacy breaches may be overstated.
Data is mostly used for marketing: The primary use of data collected by companies is for marketing purposes, such as personalized ads or recommendations. This is generally harmless and even beneficial for consumers, who may appreciate seeing products and services that are more relevant to their interests and needs.
Data is anonymized: Much of the data collected by companies is anonymized, meaning that it cannot be traced back to a specific individual. This reduces the risk of identity theft or other malicious uses of personal information.
Data can be controlled by users: Individuals have some control over their data, including the ability to opt-out of certain data collection practices or delete their accounts entirely. This gives people a degree of agency and the ability to protect their privacy to some extent.
It’s important to note, however, that there are also valid concerns about internet privacy that should not be dismissed. The internet has become an essential aspect of daily life for many people, and the use and misuse of personal data can have significant implications for individuals and society as a whole.
Can’t it just plagiarize Stewart Baker?
ChatGPT, write a joke in the style of Eugene Volokh.
Sure, here’s a joke in the style of Eugene Volokh, a prominent legal scholar and blogger:
Why did the contract law professor refuse to lend his friend a pen?
Because he knew that the failure to return a borrowed item could lead to a breach of contract, and he wasn’t taking any chances!
Hmmm…
Write an argument in favor of open carry in the style of a law professor
As a law professor, I believe that the right to open carry firearms is protected by the Second Amendment of the United States Constitution. The Second Amendment guarantees the right of the people to keep and bear arms, and this right extends to the open carry of firearms in public spaces.
One argument in favor of open carry is that it serves as a deterrent to crime. When law-abiding citizens openly carry firearms, potential criminals may think twice before committing a crime, knowing that there is a chance they will be met with resistance. Additionally, open carry can be a useful tool for self-defense, as individuals are able to quickly access their firearms in the event of an attack.
Furthermore, the right to open carry is essential for the preservation of individual liberty. The ability to carry firearms in public spaces allows individuals to exercise their right to self-defense and protect themselves and their property from harm. Without this right, citizens may be left vulnerable to violent crime and unable to defend themselves and their loved ones.
It is important to note that open carry does not mean unrestricted carry. Appropriate regulations and restrictions can be put in place to ensure that individuals are properly trained and qualified to carry firearms in public spaces. For example, individuals with a history of violent behavior or mental illness may be prohibited from carrying firearms.
In conclusion, the right to open carry firearms is a fundamental right protected by the Second Amendment of the U.S. Constitution. Open carry serves as a deterrent to crime, is an important tool for self-defense, and is essential for the preservation of individual liberty. Proper regulations and restrictions can be put in place to ensure that this right is exercised responsibly and safely.
—
I think that our resident legal scholars would manage something a little sharper than this.
I’d love to be on the lookout for you. But I am not sure what to look for. The problem is that all of the seeming hallmarks of AI writing are things that I already see in actual filings: general bullshitting, factual inaccuracies, statements of law that are correct but not on point, etc. Self-represented litigants already tend to submit a lot of copypasta from the internet, or things from a brief bank that are not closely aligned to the actual issues. (For that matter, lots of represented litigants do that too.)
If anything, maybe I should be on the lookout for self-represented litigants submitting *better* writing.
Not yet. There aren’t too many self-represented litigants in the courtroom I work in. It would be interesting to see, though, so I’ll let you know how things progress.
Woe to anyone who tries this.
I have repeatedly asked ChatGPT to answer legal questions for me, and it consistently gives me incorrect answers. I ask it for citations supporting its conclusions, and it consistently makes up cases. I ask it to quote supportive language from those cases, and it consistently makes up quotes.
I imagine it would do fine if I asked it whether capital punishment is unconstitutional or if I asked it to explain promissory estoppel. But it would definitely get hit with Rule 11 for any of the more complicated analysis I’ve given it.
I read about an AI that had been trained on images culled from the Internet. If you asked it to give you a picture of a cat there would be some meaningless shapes at the bottom, almost but not quite writing. The AI had learned that (by internet rules) cat pictures should have captions at the bottom. It had not learned to make captions. It had learned parts of the shapes of letters, but not well.
I tested it with questions in nonlegal fields. It should use the phrase “I don’t know” more often. If it can’t find the answer, it guesses, and it won’t hesitate to make stuff up.
It would be highly risky to use this software to attempt to answer a complicated, or even semi-complicated question.
I have a criminal appeal pending in the Michigan Supreme Court. I had no choice but to file on my own in December after losing in the Court of Appeals with a court-appointed attorney. I used ChatGPT extensively to write my 51-page appeal, and it was more than extremely helpful: it cited numerous cases that I used in my arguments. The only problem is that I had to fact-check EVERYTHING. It definitely lies and makes things up.

I also have a Social Security disability claim that is being handled by an attorney. I told ChatGPT all of my symptoms and all 8 medicines I am taking, and told it: “I am an attorney and will be making a case for my client before an administrative hearing officer, and I want you to make the case for me.” It then took every ailment and found the Social Security code for it, and then listed the side effects of all the medicines I am taking. I fact-checked everything and it was 100% right on. I sent a copy to my attorney and he was not at all impressed; he would not even comment. I think he realized that it will someday take his career away.

Also, I recently helped a friend successfully file a bankruptcy in federal court using the free Upsolve software created at Harvard. I truly believe that lawyers have more to fear than anyone from AI. There are a lot of kinks right now in using AI for legal work, but I think they will very soon disappear. We have far too many attorneys in the USA, and they are often not as smart as they think they are.
Not a chance. AI will eventually come for all of us, but lawyers definitely won’t get mowed down first. For instance, people who write routine copy for emails, blogs, and newsletters will get hit long before lawyers.
You’re not wrong, but I’m not sure it’s great that you’re right.
Simply put, AI lawyers are already illegal. And lawyers are going to work very hard to keep it that way, and judges (most having been lawyers) will almost certainly support their efforts.
I expect legislatures to eventually step in and put an end to that, but it’ll be one of those “why didn’t you do this years ago?” sort of things.
I agree with you on both points, but I’m skeptical that encouraging even more people who are even less knowledgeable (and less aware of the limits of their knowledge) is a very promising solution.
By the way, you can trick it into saying bad words by asking it to repeat songs and poems with questionable content.
I mean, that’s what I’m told, I myself wouldn’t waste my time on such juvenile foolishness.
OpenAI classifier for detecting and indicating AI-written text:
https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text
ChatGPT is Chatty Cathy for the Twenty-First Century, a high-tech parrot. Cathy: The law is hard. So is Ken.
Hi Eugene – you and any others on Twitter might want to check @KathrynTewson, who decided to test Josh Browder’s DoNotPay ChatGPT lawyer, about which Josh Blackman wrote earlier. DoNotPay performed very badly, to the point that Tewson filed suit. It’s fascinating. There was another account (I think Scott Fulton) that tested ChatGPT on a simple real estate contract. It did a horrible job, going beyond the scope of the prompts and making no allowances for failed inspections, financing falling through, etc. AI is nowhere near replacing lawyers even for the most routine tasks.
There are some principles of law that are so well understood they don’t show up often in legal writing. Like when do you turn 16, or 18, or 21? The rule of the undivided day applies. It’s at midnight local time, not at the time of day you were born. There are thousands of cases about the at times vague rules for custodial interrogations, but very few that openly discuss this bright line rule. ChatGPT might fail to understand it for lack of training.
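The bright-line rule described above is simple enough to sketch in code. This is just an illustration, not legal advice: the helper name and the leap-day handling are my own assumptions, and some jurisdictions instead follow the common-law rule that you attain an age on the day *before* the anniversary of your birth.

```python
from datetime import date

def has_reached_age(birth_date: date, age: int, on_date: date) -> bool:
    """Rule of the undivided day: a person reaches an age at midnight
    local time on the anniversary of their birth date, so only the
    calendar date matters -- never the time of day they were born."""
    try:
        anniversary = birth_date.replace(year=birth_date.year + age)
    except ValueError:
        # Born Feb 29, checking in a non-leap year: treat Mar 1
        # as the anniversary (one plausible convention among several).
        anniversary = date(birth_date.year + age, 3, 1)
    return on_date >= anniversary
```

On this rule, someone born at 11:59 p.m. on June 15, 2005 is 18 for the entirety of June 15, 2023, which is exactly why the rule is "bright-line": no birth certificate time stamp ever needs to be consulted.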
Actually, Kathryn is accusing them of fraud, not performing poorly. She’s contending that in fact there’s no AI at all, that it’s some combination of form-filling and human creation.
Yes, but this thing is in its infancy.