The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
A Guest Post on Plagiarism and ChatGPT
As a professor and someone who values academic integrity, I am always concerned about the issue of plagiarism. Plagiarism is the act of using someone else's work or ideas as your own, without proper attribution or citation. It is a serious offense in the academic world, as it undermines the fundamental principle of intellectual honesty and undermines the integrity of the research and scholarship that is being produced.
Recently, there has been some concern about the use of AI language models, such as ChatGPT, in the context of plagiarism. ChatGPT is a powerful tool that can generate human-like text, and it has been used for a variety of purposes, including generating chatbot responses and creating content for websites and social media. While it is certainly an impressive technology, there are also valid concerns about the potential for ChatGPT to be used to produce content that is not entirely original.
There are a few different ways in which ChatGPT could potentially be used to facilitate plagiarism. For example, someone could feed a text or article into the model, and then use the generated output as their own work, without properly citing the source material. Alternatively, someone could use ChatGPT to generate text that is similar to someone else's work, but not identical, in order to avoid detection. In either case, the result would be a violation of academic integrity.
However, it is important to note that ChatGPT is not inherently a tool for plagiarism. Like any technology, it can be used responsibly or irresponsibly, depending on how it is applied. It is up to individuals to use ChatGPT (and other tools) in a manner that is consistent with academic standards of integrity and honesty.
In conclusion, plagiarism is a serious issue in the academic world, and it is important to be mindful of the potential for ChatGPT (and other AI language models) to be used in a manner that violates these standards. However, it is ultimately up to individuals to use these tools responsibly, and to ensure that they are not contributing to the problem of plagiarism.
If you couldn't tell, this post was written by ChatGPT. I inserted this prompt:
Write a post about plagiarism and ChatGPT in the voice of Professor Josh Blackman.
Worried yet?
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
The structure of ChatGPT responses is easy to spot. You're making this out to be a much bigger issue than it currently is.
Maybe you're just bad at noticing these things.
Some text generation tools can produce output that is difficult to distinguish from human-written content. While these tools may not be able to generate writing that is completely indistinguishable from that of a human, they can still produce output that is good enough to fool some people, especially if the text is short or is on a topic that the person is not familiar with. Additionally, even if a teacher is able to spot computer-generated text, it may not always be clear whether the student actually wrote the text or simply copied it from a tool.
It may not be feasible for teachers to spend the time and effort necessary to thoroughly verify the authenticity of every piece of writing that they receive. With the increasing use of technology in education, teachers may already be overwhelmed with grading and other responsibilities, and adding the task of verifying the authenticity of student writing on top of this workload could be unrealistic.
It's also important to note that while text generation tools may make it easier for students to cheat, they are not the only means by which students can cheat on writing assignments. Students can also plagiarize content from the internet or purchase pre-written essays, which may be more difficult for teachers to detect.
Overall, while it is certainly possible for teachers to spot computer-generated text in some cases, it is not always easy or practical for them to do so.
^ That's more of it.
It does seem though like a lot of its telltale signs are the same as bad student writing... overuse of connecting words, underuse of contractions, a bit repetitive... I'm not an English teacher but if I were, I'd find it hard to accuse students of being bots, even if I were pretty sure. Especially since tools like spell and grammar checkers push writing in the same direction.
I expect Chat GPT and grammar checkers to sort of merge over time... "What if you structured this paragraph like this instead?"
I think you missed the point entirely. The fear is that kids could use the output of Chat GPT verbatim.
It's true that that's not really plagiarism, since it's not copying any other person's work. It might even be considered legitimate, especially in contexts where writing skills aren't part of the exercise, similar to how some math tests are designed for use with a calculator. If you know the right question to ask Chat GPT, that's good enough.
But if you're trying to teach writing skills, even in part, Chat GPT is an existential threat. I think, just like with calculators, educators are going to have to decide what they really care about. Math teachers decided to let go of long division, for example, since... why? Maybe, as we start to be able to outsource writing to machines as well, we don't need to spend so much time on writing exercises...?
I'm not convinced. It seems different somehow. I think teachers are going to have to come up with ways of thwarting Chat GPT. At the moment, including current events is one way. (It only has knowledge up to 2021.) There are probably others, such as a requirement to include some sort of inside knowledge / context known only to the class.
“It’s true that that’s not really plagiarism, since it’s not copying any other person’s work”
The definition of plagiarism includes not only stealing another’s ideas or words, but it also includes passing off another’s writing as one’s own. So, yes, this is plagiarism.
The way to avoid reliance on bots is to have students do authentic writing in person and to get to know their individual styles well. Breaking up long writing assignments also helps.
All old tech has its uses for learning and for deeper understanding, but that doesn't make it necessary. Ancient Egyptians and Sumerians knew things about farming and pottery and the seasons that 99.99% of the populace don't even know they don't know now. How many people could even assemble a car or computer given all the parts and no manual? Is that any more useful than knowing long division? Maybe in 10 or 20 years, knowing how to write novels or legal briefs will be as useful as long division.
I think teachers are going to have to come up with ways of thwarting Chat GPT
That's the problem. It's an unreasonable ask for every individual teacher out there to come up with a way to consistently recognize ChatGPT output while grading 20-30 essays every couple weeks, when even 3rd graders can easily learn to use it to generate their essays, book reports etc. After the couple of examples here I can kind of recognize the structure too, but it will certainly get better and harder to chase down. There needs to be a tool to identify suspicious text, so that the burden isn't on teachers who may or may not be technically inclined.
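The telltale signs mentioned upthread (heavy use of connective phrases, few contractions) could in principle be scored mechanically. Here is a toy Python sketch of that idea; the function name and word lists are invented for illustration, and real detectors rely on statistical language models, not this kind of surface count:

```python
import re

# Toy heuristic only: scores text on two telltale signs mentioned in this
# thread -- heavy use of connective words and a low contraction rate.
CONNECTIVES = {"however", "additionally", "furthermore", "overall",
               "therefore", "moreover", "alternatively"}

def suspicion_score(text: str) -> float:
    """Higher score = more bot-like on this crude metric."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    connective_rate = sum(w in CONNECTIVES for w in words) / len(words)
    contraction_rate = sum("'" in w for w in words) / len(words)
    # More connectives and fewer contractions push the score up.
    return connective_rate - contraction_rate

human = "I can't say I'm worried, but it's a fun toy. Don't panic."
bot = ("However, it is important to note that overall this is serious. "
       "Additionally, it is important.")
```

On these two samples, `suspicion_score(bot)` comes out higher than `suspicion_score(human)`, which is all a heuristic like this can offer: a flag for closer reading, not proof of authorship.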
Why not just require in class closed book tests? Can't implant ChatGPT in brains yet.
Yes, that’s the solution if the test is measuring subject knowledge or the ability to analyze something (legal text, poem, etc) in real time.
But sometimes you need/want students to do a literature survey or other background research, and then take the time to make their case in more polished writing, with revisions and rewrites. On this type of assignment plagiarism has been an issue since maybe 2000 BC, but at least cheaters had to go to the effort of reading enough sources to find one that was high quality but not so well-known that it would be recognized. That search in itself required, or even produced, some small amount of subject knowledge and reasoning ability. ChatGPT finds the sources, makes its own judgment about which ones are best, and then produces original writing.
"this post was written by ChatGPT. "
No wonder that it was better than the usual Blackman OP.
You should try using it for comments.
Why you few put so much effort into slagging on Prof Blackman is beyond me. "Stop making me waste so much time reading and commenting on your articles, you beast you!"
Reading in perverse fascination at melodramatic and facile partisan wrongness and then reacting to same is not a particularly rare facet of human nature.
Blackman didn't need to post this, Don Nico didn't need to comment on it, you didn't need to respond to Don Nico, and I didn't need to type this.
No provable need to do anything at all really. Philosophers have been at it for forty centuries and still can't say why there should be something rather than nothing.
I read Blackman's articles because he knows more than I do and I learn something from most of them. There are several commenters who brag about wasting their time on his articles, and some who brag about commenting on articles they don't read, and all of them love to insult his college. I respond to them to try to get an honest answer: why do they go out of their way to waste their time on somebody / something they consider a waste of time? None have given an honest answer yet. The usual response is similar to yours, that I don't have to respond either, which is just more whataboutism, and doesn't answer why they read articles they consider a waste of time even before reading them.
The answer is to a) alert readers like yourself that Blackman is the chaff of this blog, and b) alert Blackman that he's a buffoon who should either try harder or learn self-awareness and humility.
(a) is virtue signalling and arrogance.
(b) is pure arrogance.
Typical statist control-freak attitudes. Piss poor excuses.
It’s not arrogance to notice someone is a buffoon when they supply so much material. The dude thinks he invented the term “rocket docket” in 2021.
Arrogance... sure, if you prefer your comments meek. But who wants meek comments? Boring.
Virtue signaling doesn't make sense though. What virtues are being signalled? Try again.
“because he knows more than I do”
No he doesn’t.
Alphabet,
Fair question. I read the OP because I am interested in the topic and in ways to prevent cheating in my classes. My comment was that the bot-written post was a better piece of writing than Blackman usually produces on topics in which I have an active practical interest. That comment is completely germane to the topic under discussion. LTG was far more explicit than I was.
Why insult his college? Because he touts himself as an outstanding legal scholar and yet he is stuck at a low-tier institution. In other words, because he actively invites such criticism.
Why read his posts in the first place? Because he is often the only conspirator who is writing about a somewhat "hot topic." Hence he is the one who provokes comments and invites criticism of his own offering.
Are you satisfied with my answer? It is honest. But if you want more, say so in an honest manner and I'll respond.
No, I’m not worried. Because this sounded nothing like you. It was crisp and concise, and it lacked smug self-satisfaction, poor attempts at humor, and glaring factual or logical errors. So no one would guess you wrote it.
Plus, taking the prompt and replacing Blackman with other professors came up with similarly structured entries that had the same turns of phrase.
If you can’t notice someone simply putting your prompts into the bot and putting that on the exam, then maybe you need to read your exams more closely.
Ouch!
Okay. So I thought it would be a better exercise to actually try putting a law school hypothetical into the chat to see what would happen. This is a very quick one I thought up off the top of my head:
Prompt: “Suppose Frasier tells Niles that he will give him $200 if Niles divorces Maris. Niles buys opera tickets on credit in anticipation and then divorces Maris. Eddie eats the money before Frasier can pay. What claims does Niles have against Frasier? What are his damages if any? What defenses does Frasier have?”
Answer: “If Frasier and Niles entered into an agreement in which Frasier promised to pay Niles $200 in exchange for Niles divorcing Maris, and Niles fulfilled his end of the agreement but Frasier did not follow through on the payment, Niles may have a claim for breach of contract against Frasier. In order to succeed on a claim for breach of contract, Niles would need to show that the agreement between himself and Frasier was enforceable and that Frasier failed to fulfill their obligations under the agreement.
If Niles incurred damages as a result of Frasier's failure to fulfill the terms of the agreement, such as purchasing opera tickets on credit in anticipation of receiving the payment, he may be able to seek compensation for those damages. However, Niles would need to provide evidence of the damages that were incurred and how they were caused by Frasier's breach of contract.
As a defense, Frasier may argue that the agreement between himself and Niles was not enforceable because it involved illegal or unethical behavior, such as encouraging or incentivizing the end of a marriage. Frasier may also argue that the money was lost due to the actions of a pet and therefore was not available to fulfill the payment, but this would not necessarily release Frasier from their obligation to fulfill the terms of the agreement. It would ultimately be up to a court to determine the validity of these defenses and whether Frasier is liable for breach of contract.”
You can tell this wouldn't be a law student's answer because it doesn't recite elements formulaically or address consideration. It doesn't consider alternative theories of liability like promissory estoppel. Doesn't CREAC. Or IRAC. Etc. So no, I'm not worried.
I wouldn't bet against that beating the curve at South Texas College of Law.
OK, I don't know about serious law schools, or any law school.
I can tell you that is significantly better than the majority of essays I get from junior and senior undergrads at a major state university. It is grammatical, on topic, and organized into a progressive sequence of ideas.
True story: a 3rd year student submitted a formal report. Significant portions were written in the informal second person: "You measure the impedance...". I returned it with a comment: "Rewrite, avoid using 'you'". He replaced all instances of "you" with "u" and resubmitted.
No meaningful content is "entirely original." However, ChatGPT is a source of content, so verbatim copying, or copy-editing the generated content so it is not verbatim, should still constitute plagiarism.
I can't say I'm particularly worried, but I can certainly see why Prof. Blackman is:
Let's play "Guess the ChatGPT prompt":
As a professor of law at George Mason University, I have long argued in favor of open borders as a policy that would bring significant benefits to both immigrants and native-born citizens. However, I understand that there may be concerns about the potential impact of open borders on certain communities, including gated communities like my own.
One of the primary arguments in favor of open borders is that it would lead to a more efficient allocation of labor. Immigrants tend to be willing to work in occupations that are undesirable or low-paying, which can help to address labor shortages and keep the economy running smoothly. In addition, open borders would allow for the movement of skilled workers to countries where their skills are in high demand, leading to increased productivity and innovation.
Another argument in favor of open borders is that it would lead to greater economic growth. Immigrants tend to be more entrepreneurial and start more businesses than native-born citizens, which can drive economic growth and create new jobs. In addition, open borders would allow for the free flow of ideas and knowledge across borders, which can lead to increased innovation and technological progress.
While it is important to consider the potential impact on communities like gated communities, it is also important to recognize that the benefits of open borders would extend to all members of society. In addition, there are steps that can be taken to address any potential negative impacts, such as providing resources and support to help immigrants integrate into their new communities.
In conclusion, while it is understandable to have concerns about the potential impact of open borders on certain communities, the overall benefits to both immigrants and native-born citizens make it a policy worth considering. It is important to address any potential negative impacts, but the potential benefits of open borders make it a policy worth pursuing.
I realize that this is new to law professors, but for college professors in other subjects the postmodernism generator has been around for over 25 years. Why not ask them how they deal with it?
This is interesting, but the entire thing is stilted and reads like auto-generated news stories, which have existed for years now.
The bigger concern is not that the project is good *now*, it's that it will always improve. It's easy to spot in 2022/23, but in 2030 -- or even 2024! -- it will probably be less easy to spot.
If it improves enough, there could be some real benefits. A lot of my favorite authors died (or jumped the shark) before producing enough output. Maybe that 2030 version will be good enough to simulate or even better them at novel length.
Example prompts: "A 70,000 word novel in the style of Alistair Maclean, in his first person phase, but with no more plots turning on a smashed radio." "A book of SF short stories in the style of 1940s Heinlein but excluding any embarrassing attempts at romance."
Of course it could be abused: "A novel in the style of Jane Austen, but focused on support of social equity for underrepresented groups." "A psychological novel in the style of Henry James, showing that the 2020 election was stolen by the Democrats."
The problems for education posed by this technology go much deeper than plagiarism and cheating on exams. I recently retired from forty-eight years of teaching Government, including Con Law, at a liberal-arts college, one with reasonably high selectivity in admissions. Our students generally come with high SATs and high rankings in their high schools. Over the past couple of decades, however, I witnessed an unmistakable trend: many students, and I mean many, were not able to do simple arithmetic in their heads. I recall calculations on the order of “fifteen percent of forty” stumping all but a half-dozen students in a class of twenty-five. In an introductory U.S. Politics class, I would routinely explain that an Electoral College victory requires a majority, defined as more than half, of the 538 total. So, how many do you need to be elected, I would ask the class. A few decades ago, most of the students would call out “two-seventy,” in short order. Sometime in the early nineties, the number who responded quickly and correctly began to decline noticeably. In recent years, out would come the calculators or cell-phones.
I am not a student of technology and its effects on peoples’ minds. I can only speculate, based upon my own experience and conversations with students and other teachers. My strong conviction is that the alarming decline in my students’ abilities to perform simple arithmetic operations is attributable to the widespread practice, now nearly ubiquitous, of using calculators in middle-school, and even elementary school, math classes. Students are not merely permitted to use these devices, they are encouraged, and often required. In many districts they are provided by the schools. Two-thirds of the SAT math test is done with a calculator.
Will this new AI technology lead to similarly dismaying performances with respect to skills other than arithmetic? As students increasingly depend on Google Translate to do their foreign-language homework, and this ChatGPT facility to write their essays, will the teaching profession decide, as it collectively did with calculators, to go all-in and adopt it wholesale? A friend of mine relates that his child's seventh-grade teacher actually prefers his students to use Grammarly for their homework, because, like most of us, he really hates having to read poorly written essays. Egad!
If we’re managing to get by with most people unable to do simple calculations in their own brains, why not grammar and clear writing? What could go wrong?
On the one hand, yes, it’s a decline in human ability.
But it’s not as big as others we accepted more than a hundred years ago, most notably the decision that every kid didn’t need to know how to grow/raise food crops and animals.
It’s sad that college students can’t do 15% of 40 without a calculator, but the fact is they get through life without it. I imagine they’ll survive even if dependent on AI to help them express themselves in writing.
Prediction: in 10 years or less* smartphones will be able to clean up poorly enunciated teen-glish mumbling in real time and change it into a 1950s BBC announcer with a pleasing baritone and graduate level vocabulary. That’s the free app, the premium app can fix the underlying reasoning or even supply it if missing.
*Barring, as always, that Great Social and Economic Collapse we’re all expecting.
The ability to perform basic arithmetic with minimal effort allows one to recognize simple bullshitting in real time. Often the precise calculation is not necessary to tell if the answer is remotely plausible. In real-time medicine administration it is unnerving to observe anesthesia residents unable to calculate a dose correctly, and be unable to recognize that their machine assisted calculation has gone off the rails by an order of magnitude (or two or three orders of magnitude, one must always carefully observe units).
So Bill Frasure's example of students unable to calculate 15% of 40 (does this population no longer tip?) explains why physicians in training cannot recognize that a one-year-old, at 15% of an adult's mass, should receive a dose somewhere around 15% of the adult dose. Yet they will uncritically accept their cell-phone calculation even when it arrives at a result vastly different from the recognizably appropriate answer.
We have instituted training using 'word problems' akin to standard grade-school math in order to get these post-graduates (fully licensed physicians) up to speed.
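The order-of-magnitude check described above can be sketched in a few lines of Python. This is purely an illustration of the arithmetic (the function name, numbers, and tolerance are invented; it is nothing like a clinical dosing protocol):

```python
def plausible_dose(adult_dose_mg: float, adult_mass_kg: float,
                   patient_mass_kg: float, computed_dose_mg: float,
                   tolerance: float = 3.0) -> bool:
    """Rough sanity check: a patient at X% of adult mass should get a dose
    in the neighborhood of X% of the adult dose. Flags machine-assisted
    calculations that are off by roughly an order of magnitude or more.
    Illustration only -- not a clinical dosing rule."""
    expected = adult_dose_mg * (patient_mass_kg / adult_mass_kg)
    return expected / tolerance <= computed_dose_mg <= expected * tolerance

# A one-year-old at 15% of a hypothetical 70 kg adult: expect roughly 15%
# of the adult dose. A result ten times that should fail the check.
```

With, say, a 100 mg adult dose, `plausible_dose(100.0, 70.0, 10.5, 15.0)` passes while `plausible_dose(100.0, 70.0, 10.5, 150.0)` fails; the point is exactly the mental estimate the residents were missing, done in one multiplication.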