The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

Plagiarism and ChatGPT

Get ready to interrogate students about their AI-enabled answers.


Since I began teaching, I have only given essay exams. No multiple choice. No short answers. Rather, each exam has two complex issue-spotter essay questions. The exam is completely open-book. I always tell my students they can bring whatever they want to the classroom--nothing will help them. I also issue a regular warning: do not cheat, because I will spot similarities in writing very quickly. Over the years, I have had only one incident. I found that two students had very similar answers to a particular essay question. I inquired further, and found out that the students were study partners, and had pre-written answers to questions in advance, based on what I had asked in the past. They inserted those answers into the exam. The pre-written answers were not exactly on point, and did not receive full credit, but there was no plagiarism in that case.

Enter ChatGPT. This "chatbot" uses sophisticated technology to generate answers to questions. These responses are written in plain English that is easy to understand, and they incorporate information from a massive neural network. The responses are not perfect, but may pass muster with professors who are short on time. The temptation for cheating is real. And one professor in South Carolina already caught a student plagiarizing with it. He wrote about it on Facebook, and the New York Post followed up.

This technology should strike fear in all academics. ChatGPT cannot be caught by TurnItIn and other plagiarism-detection software. The chatbot generates new answers on the fly. And each time you run the app, a different answer will be spit out. There is no word-for-word plagiarism, or poor paraphrasing. Each answer is unique. And ChatGPT is constantly evolving. It gets smarter as more people use the system, and the neural network grows. The system was only launched three weeks ago. By May, the system will be far more sophisticated, as it incorporates everything that comes before. Like the Borg, students will assimilate; resistance is futile.

How do we deal with this emerging technology? Short-answer questions are far too easy for ChatGPT to simulate. For example, "What are the elements of X" or "Describe Y concept." By contrast, a four-page fact pattern, followed by specific prompts, may be hard to jam into ChatGPT. I think we need to think long and hard about take-home exams. It is too easy for students to use ChatGPT, over and over again, to mix and match answers. Also, any in-class exam should eliminate access to devices--only paper sources. (That is my usual policy.) Finally, we should give some serious thought to oral examinations, which cannot be hacked.

Moreover, universities should revisit plagiarism policies in light of ChatGPT. There should be explicit language stating that using these tools is a violation of academic integrity standards. I imagine some policies may be framed in terms of getting help from "another" person, or something to that effect. ChatGPT is not a person--not yet at least. Students will argue that ChatGPT does not fall within the plain language of a policy designed to prohibit cheating with the help of a sentient being. And the burden of proof to establish plagiarism may be shifted, since traditional tools are not effective. There is a real/fake detector that uses the ChatGPT engine, but I haven't tested how accurate it is.

In the near term, all students should receive a stern talking-to about these tools. In the long run, courts may start dealing with briefs written by ChatGPT. Judgment Day is coming.