The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Plagiarism and ChatGPT
Get ready to interrogate students about their AI-enabled answers.
Since I began teaching, I have only given essay exams. No multiple choice. No short answers. Rather, each exam has two complex issue-spotter essay questions. The exam is completely open-book. I always tell my students they can bring whatever they want to the classroom--nothing will help them. I also issue a regular warning: do not cheat, because I will spot similarities in writing very quickly. Over the years, I have had only one incident. I found that two students had very similar answers to a particular essay question. I inquired further and found out that the students were study partners who had pre-written answers to questions in advance, based on what I had asked in the past. They inserted those answers into the exam. The pre-written answers were not exactly on point and did not receive full credit, but there was no plagiarism in that case.
Enter ChatGPT. This "chatbot" uses sophisticated technology to generate answers to questions. These responses are written in plain English, are easy to understand, and incorporate information from a massive neural network. They are not perfect, but may pass muster with professors who are short on time. The temptation for cheating is real. And one professor in South Carolina caught plagiarism. He wrote about it on Facebook, and the New York Post followed up.
This technology should strike fear in all academics. ChatGPT does not work like TurnItIn and other plagiarism-detection software. The software generates new answers on the fly. And each time you run the app, a different answer will be spit out. There is no word-for-word plagiarism or poor paraphrasing. Each answer is unique. And ChatGPT is constantly evolving. It gets smarter as more people use the system and the neural network grows. The system was launched only three weeks ago. By May, it will be far more sophisticated, as it incorporates everything that came before. Like the Borg, students will assimilate; resistance is futile.
How do we deal with this emerging technology? Short answer questions are far too easy to simulate. For example, "What are the elements of X" or "Describe Y concept." A four-page fact pattern, followed by specific prompts, may also be hard to jam into ChatGPT. I think we need to think long and hard about take-home exams. It is too easy for students to use ChatGPT, over and over again, to mix and match answers. Also, any in-class exam should eliminate access to devices--only paper sources. (That is my usual policy.) Finally, we should give some serious thought to oral examinations, which cannot be hacked.
Moreover, universities should revisit plagiarism policies in light of ChatGPT. There should be explicit language that using these tools is a violation of academic integrity standards. I imagine some policies may be framed in terms of getting help from "another" person or something to that effect. ChatGPT is not a person--not yet, at least. Students will argue that ChatGPT does not fall within the plain language of a policy designed to prohibit sentient cheating. And the burden of proof to determine plagiarism may shift, since traditional tools are not effective. There is a real/fake detector that uses the ChatGPT engine, but I haven't tested how accurate it is.
In the near term, all students should receive a stern talking-to about these tools. In the long run, courts may start dealing with briefs written by ChatGPT. Judgment Day is coming.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
How to prevent it? Prohibit using a VPN, and have a sniffer on the network looking for such communication with the outside entity. All of their internet communication during the exam could be logged, and later searched if there were questions. None of these kinds of complex AI software can be run on a laptop; they require outside communication (at least for the foreseeable future).
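The log-then-search idea above is mechanical enough to sketch. This is a hypothetical illustration only: it assumes lookups were already captured as (student, domain) pairs by some logging tool, and the blocklist entries are examples, not an authoritative list of AI-service domains.

```python
# Hypothetical sketch of reviewing logged exam-network traffic.
# Assumes lookups were captured as (student, domain) pairs; the
# blocklist entries are illustrative assumptions.
BLOCKLIST = {"chat.openai.com", "api.openai.com"}

def flag_lookups(lookups):
    """Return, sorted, the students whose logged lookups hit the blocklist."""
    return sorted({student for student, domain in lookups
                   if domain in BLOCKLIST})
```

A proctor would run this after the exam, over the saved logs, rather than in real time.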
The problem with that is anyone can still hit the internet via their phone, or set up a hotspot on their phone for a laptop. I'm not really sure how you can prevent that.
I suppose you could deploy a cellular signal jammer but heaven help you if you get caught with one of those.
(They can be fun at bars or restaurants to terminate the calls of people around you who are acting self important).
Forbid the use of phones in the exam room with a policy of an auto-fail for any detected use of a cellular device.
As much as I agree, it's pretty hard to detect without a physical search, and that raises problems all of its own.
Come on; just make detecting use of cell phones a class project for some EEs. It's not like doing it is technically challenging; they wouldn't even have to be grad students.
It does mean proctoring of the exam. I don't think that you can prevent a person having a cell phone in the room.
I don’t think that you can prevent a person having a cell phone in the room.
One thing we try to do in high schools is to watch each student turn their phone off and put it in their bag/backpack, and then take the backpack to another part of the room. They could have a second phone, or we might miss that one student didn't do that, but it helps.
Take the phone out of your bag. Get an F on the exam. Get out of your seat without permission. F for cheating.
No system can prevent cheating 100%, but several can reduce the number to a very tiny percentage. When in doubt, the student's paper is confiscated and an oral exam is scheduled.
Obviously ALL rules must be broadcast up-front.
I am a public high school teacher. I have proctored standardized exams dozens of times. The strict no cell phone policy isn't as difficult to enforce as you suggest if the proctor(s) really do monitor students constantly. It is part of the rules for us not to use any electronic device except as necessary to administer the exam or to communicate with testing coordinators about problems. We cannot divert our attention from the students to grade papers or do any other work. We literally are expected to be watching them and circulating among them the entire time. (And rules for tests like the SAT and ACT specify at least 1 proctor per 30 students, have rules about how they are to be seated, and so on.) Students may occasionally get away with using a phone if they are careful, even when the proctor is diligent, but they will get caught and their scores cancelled more often than not, which is a fairly effective deterrent.
It sounds like you use a proper and effective procedure.
There are students with a legitimate need for a cell phone; parents come to mind.
It turns out that for all but the last 40 or 50 years of Homo sapiens' existence, parents were able to do without cell phones.
All test takers could be given the number of a central “emergency” line where operators can reach the lead proctor on their cell phone if needed and pass on a message.
A concerned parent could alter their outgoing message (OGM) before surrendering their cell phone to say something like
The “emergency line” operators would screen calls (and perhaps make suggestions such as “Have you considered calling 911 instead?”). If the caller persists, the operator would get the name and phone number of the test taker, match that against current test takers, and, if located, give the caller an opportunity to leave a message. That message would immediately be forwarded to the lead proctor, who would then approach the test taker and give them the option of withdrawing from the test in order to hear it (perhaps the proctor could give them their own summary of the message to help the test taker make the decision).
Perhaps a test taker could also specify that anyone calling the emergency line must provide a PIN specified by the test taker to the “emergency line” operator in order to continue. The test taker could provide this PIN and the emergency line number only to those they are willing to withdraw from the test to talk to.
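The PIN check proposed above amounts to a simple lookup. A minimal sketch, with the caveat that every name and field here is an illustrative assumption, not part of any real testing system:

```python
# Hypothetical sketch of the PIN-screened "emergency line" described
# above. `registry` maps each test taker's name to the PIN they handed
# out in advance to people they are willing to be interrupted by.
def should_forward(test_taker, caller_pin, registry):
    """Forward a message to the lead proctor only when the caller
    supplies the PIN the named test taker registered."""
    return registry.get(test_taker) == caller_pin
```

An unknown test taker or a wrong PIN both fail the check, so the operator simply declines to forward the message.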
"I suppose you could deploy a cellular signal jammer"
DO NOT DO THIS!!!! The problem is that campus police routinely use text messages (instead of radios) to communicate and you may WANT your campus police able to communicate to/from your classroom.
Forget a classroom shooting; take a basic medical emergency. A stressed-out student goes into premature labor -- wouldn't you like some help with that?
The solution to these is very simple. Print out a hard copy and (cleanly) take out every fourth or fifth word with a black marker. Then ask the student to read the essay to you.
If the student really wrote it, he/she/it is going to get most of it -- there may be a little bit of stumbling as this is also an additional edit and the kid may be thinking of a better way of having written the paragraph. This is where you use your judgement.
But if the kid didn't write it, likely didn't even fully read it, he/she/it ain't gonna have a chance. Not a chance.
NB: Yes, take out the big words, but also the conjunctions and such. The best way to do this is every fourth word.
The best demonstration is to try it yourself sometime -- something that you did NOT write but have read in the past, such as pablum from the campus president. Have a buddy take out every fourth word, and then listen to yourself reading it OUT LOUD. (Silent reading involves skimming, and you might not notice this.)
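The black-marker deletion described above is mechanical enough to script. A minimal sketch (the blank marker and word-splitting-on-whitespace are arbitrary choices for illustration):

```python
def cloze(text, n=4, blank="____"):
    """Blank out every nth word, mimicking the black-marker deletion
    described above (punctuation stays attached to its word)."""
    words = text.split()
    return " ".join(blank if (i + 1) % n == 0 else word
                    for i, word in enumerate(words))
```

Running a suspect essay through this and handing the student the printout reproduces the test: an author fills the gaps easily; a non-author stumbles.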
> take a basic medical emergency.
It may shock you, but people did manage to survive before cell phones.
Yes, but people also survived before there were ambulances or any kind of phone, right? But also, people died who would have lived had those things existed. That is the point here. One of the benefits of ubiquitous cell phones is the degree to which they facilitate instant communication in emergencies. There is no doubt that people live who would have died without them being available. There had better be a life-or-death emergency to justify jamming cell phone signals in an area, or anyone who was denied life-saving intervention by one was murdered just as surely as if someone had slashed the tires of an ambulance that could have saved a person.
If you must, run a single network patch cable into the room; then internet, calls, etc. all go through one link you can monitor.
>murdered just as surely
LOL.
This is ridiculous. Should they not have classes on the second floors of buildings because it would take first responders longer to get there in an emergency than a ground floor classroom? We do not need to design policies against the most far-fetched scenarios.
Seen a payphone lately?
"I suppose you could deploy a cellular signal jammer"
Cellular signal jammers are a violation of federal law.
Devin,
Is it easy to block access to ChatGPT?
One might also require a keystroke sniffer on any device used.
Send a picture of the question to your off-campus helper.
I could set up an innocent-looking server which relays your question to the forbidden server. Mail services would have to be forbidden too, to prohibit emailing questions to helpers. Sniffers don't tell you the content of secure connections, only the endpoints.
Turn the exam room into a big Faraday Cage.
Closed book, handwritten exams and/or oral exams are all possibilities.
No take home exams.
No cell phones, tablets, computers. ETC.
Very long exams with insufficient time to answer all questions.
Almost all of these choices increase the professor's workload.
You will run afoul of the ADA on that -- all of those are "reasonable accommodations" that I've seen students given, including having the exam proctored by the Disability Services staff in their office.
Let someone complain, and I'll find an accommodation that assures me that there is not cheating.
The easy answer is multiple-choice exams. And, to be honest, they've long been the fairer mode of testing, i.e., objective and consistent grading.
It doesn't hurt to make explicit that the use of such tools is forbidden, but have you considered simply trusting the students? I went to college at a university with an honor code; students were presumed to act honestly. Exams were not proctored, for instance. It was kind of a shock when years later I went to law school and saw students treated like presumptive cheaters. As lawyers, we are trusted with our clients' lives and livelihoods. We can make representations as officers of the court and have those representations accepted. And yet, as aspiring lawyers, we aren't trusted to have any integrity at all. (Yes, yes, insert lawyer jokes here.)
David,
Honor codes work best where the penalties for violation are severe, as in the military academies. Get caught and you're expelled.
We had an honor code where I went, and the penalty was expulsion, except suspension was possible for a first offense.
My impression was that it was obeyed, but of course we didn't have electronic cheating methods available. We had to chisel our answers on rock slabs with a stone tool.
I'm not sure expulsion was the first go-to penalty, but definitely suspension. And, obviously, automatic flunking.
By the way, part of the honor code was that students were supposed to report any violations of the honor code by others that they became aware of; failure to do so was itself a violation.
That was part of ours too.
"We had to chisel our answers on rock slabs with a stone tool."
Thanks for the chuckle
In the universities I went to it was pretty much the "trust the student" approach and, yes, there was some cheating in exams but I think it was minimal.
Of course this was before mobile phones and we didn't even chisel our answers on rock slabs with a stone tool because neither rock nor stone had been invented yet.
"we aren’t trusted to have any integrity at all."
The profs are lawyers, so they know better than to trust aspiring lawyers hungry to land better jobs by doing better on exams.
Tested it out on a book review I assigned this past term after being somewhat suspicious/surprised by one student's submission. ChatGPT's result was quite short and didn't generate any meaningful analysis, but in terms of summarizing the book it was better than a good third of the class. So, yeah, this is a problem that's only going to mushroom as the AI improves. Yuck.
And if you'd gone back and asked it for a longer answer, it would've written more. It has plenty of areas where it has issues, but it can respond well to certain requests/instructions.
I would hire the ChatGPT team to develop software to detect the use of ChatGPT answers in an exam.
It would be really easy.
You don’t need a software team to detect ChatGPT use. Anyone can do it, and very easily.
The problem is detecting it correctly. For that, you might find the ChatGPT team isn’t as helpful as you’re assuming.
It does occur to me that I could use this ChatGPT tool to facilitate writing a book once I create a detailed storyboard version. I often get requests to write long pieces, and the software might be a way to speed up the process, especially as I type so slowly.
(Still I must admit that the speed of production has never been limited by my typing but rather by my deciding what to say in the text.)
Normally I view these sorts of concerns with new technology with skepticism, but I have to admit that the Chat GPT bot will indeed make things very difficult. If you doubt it, just find a sample law school essay prompt (it can even be a huge fact pattern followed by multiple questions), then copy and paste it into Chat GPT. The bot will immediately start composing a very plausible full-length response.
I have only used ChatGPT once but got what I consider a bullshit answer. True, it was on a somewhat esoteric subject (a TrackingPoint XS2 rifle), but here is the long-winded answer that is basically useless, since all it says, over and over, is to go do research.
“The TrackingPoint XS2 Rifle sounds like a high-tech, advanced firearm with a number of impressive features. However, it’s important to note that the effectiveness and usefulness of a rifle will depend on the specific needs and preferences of the individual shooter. Some people may find the features of the TrackingPoint XS2 Rifle to be exactly what they’re looking for, while others may not feel that it meets their needs.
In general, it’s important to carefully consider your needs and preferences when choosing a rifle. This may involve considering factors such as the intended use of the rifle (hunting, target shooting, etc.), the caliber of the rifle, the accuracy and reliability of the rifle, and the overall build quality. It’s also a good idea to read reviews and gather information from other shooters who have experience with the rifle in question.”
While I understand it is possible for ChatGPT to improve over time, currently it seems to only spit out bullshit.
I agree.
I just tried this:
In baseball, a runner is on third base and the batter hits a fly ball caught by an outfielder. When may the runner try to score?
In baseball, a runner on third base may try to score when the batter hits a fly ball that is caught by an outfielder. The runner will typically start running as soon as the ball is hit, and will try to reach home plate before the ball is thrown back to the infield.
The decision of whether to try to score or not is typically made by the runner, based on their own speed and the distance they think they can cover before the ball is thrown back to the infield. The runner will also consider the location of the outfielder who caught the ball, as well as the positions of the other fielders, to determine their chances of success.
If the runner successfully reaches home plate before the ball is thrown back to the infield, they score a run for their team. If the runner is tagged out by the fielder with the ball before they reach home plate, they are out and the inning is over.
The bolding is mine, and is intended to show plain factual mistakes.
From playing around with ChatGPT, it’s not particularly good when it comes to law. I’ve seen similar complaints from experts in technical fields as well. I don’t think it could come up with a good answer for most law school essays without significant revision from the student and knowledge of the subject matter. BSing an essay about a famous philosopher and philosophy concept is a lot easier than explaining diversity jurisdiction in 1860 when Dred Scott is the controlling case law.
I would have to investigate the code in depth, but I suspect that once ChatGPT builds up a sufficient legal knowledge database, its answers will improve.
The ChatGPT analysis of the Hasmonean Revolt was atrocious, but the flaw lay in the knowledge database.
ChatGPT only contained information relating to the moronic and completely wrong Hanukkah story, while I know the historical facts.
I don’t think a larger database helps. Con law classes are only going to pull from a small subset of seminal cases.
For briefing at large, understanding why you’re citing what you’re citing is pretty important. A case may be cited for dozens of reasons by other courts (jurisdiction, procedure, issue x, element y). You also need to know what facts from the previous case are analogous to your case. This is generally why Boolean searching Westlaw is superior to its normal results on everything except the highest-level basics of the area of law.
If you think of technological advancements as a problem for your profession, you’re on the wrong side of the future. You better be thinking of them as tools you’re going to use.
You better be preparing your students to use the new tools to their advantage. Otherwise you’re preparing your students to do work with rapidly declining value.
Good observation. I suspect some lawyers are already using ChatGPT for initial drafts of some (simple) pleadings.
If I were a student today, I'd be using ChatGPT to at least start essay assignments.
I highly recommend Grammarly, another AI tool, although not free! [Confession: I once taught English composition and always believed I was an excellent writer. Not so. Grammarly makes my writing much more effective!]
That's what I was thinking (IANAL) while trying to figure out how this would apply in, say, ten years, when ChatGPT has gotten so much better that it passes Turing tests.
Suppose it is so good that it can write better law clerk papers than 90% of the law clerks, including applying whatever bias you tell it, to simulate different judges. Judges would be fools to not recognize that.
Getting back to tests, the solution seems to me not to care how they answer the question, but to change how tests are done. Process each student one by one. Give them half an hour to write their answer, then take the answer from them, remove all their aids -- no dictionaries, no law books, no computers -- and ask them to summarize what they wrote, defend what they wrote, and attack what they wrote.
I can't think of any other way to test the students themselves. Yes, a lot more work, but seems to me this one test at the end of the course could replace all testing during the course.
I like tests where you give some answers and then ask which one is wrong (or best) and why. Hard to cheat and tests basic ability.
I have sometimes wondered what would happen if college were free, or at least a lot cheaper, and testing were entirely separate. Take all the classes you want, but pay through the nose for realistic testing.
A lot more people would take college classes for their own personal benefit and not care a bit about testing.
The people who wanted certificates and degrees to prove they’d learned something would think long and hard about paying $100 for a testing company with the reputation of being easy to cheat, and $1000 for a testing company which had a reputation for catching and publicizing cheaters.
Employers who only wanted a degree, any degree, might not care which testing companies were used. Employers who wanted real knowledge would care very much, not just for the results, but for hiring someone who also wanted to show real knowledge.
Using the tools is one thing; evaluating the understanding of a topic is another, and this is clearly about the latter. It doesn't matter what the computer can do if you cannot correctly articulate the problem set or know what the correct answer should look like.
I expect tech to come up with lucrative, highly imperfect solutions to the various problems caused by tech. I believe this is what they call ‘innovation.’ All that really matters is that somewhere along the line, people will get fired, or paid a lot less for doing the same job they did before PLUS fixing other problems caused by the solutions.
If you view technological advancements only as useful tools without concern for the problems they can also cause then you're setting yourself up to be a victim of your own blind optimism. A chainsaw is a great tool for taking down a dead tree, but only if you're aware of how dangerous it can be and take steps to mitigate the risks its use entails.
Or you could just declare chain saws unethical and insist everyone use hand saws until you become irrelevant.
Or I could implement something more intelligent by restricting use of chainsaws by a certain group and/or in certain situations unless/until the safety issues are addressed.
Then everyone who follows your rules will be at a big disadvantage to everyone who breaks your rules.
So you better hope the people breaking your rules get badly hurt by chain saws. Otherwise everyone on both sides of your rules will see that their lives would be clearly better without your meddling.
Best case scenario: your rules make sense and people would be seriously injured by ignoring them. But if that’s actually true, then, absent your rules, we would see the injuries and learn to avoid them by making our own rules. That makes your meddling redundant to obvious caution. Either way, meddling is extremely questionable. We don’t need you to be our mom.
Then everyone who follows your rules will be at a big disadvantage to everyone who breaks your rules.
Except that my rules only apply to those for whom I have the authority to set such rules, not society in general. For instance, if I own a company where the work involves a lot of serious tool usage I get to dictate who can use what tools for what purpose. That's not me meddling with your freedom to make stupid choices for yourself. That's me managing my company.
Or maybe I run a school and want to mitigate the risk of students cheating on exams...
"Except that my rules only apply to those for whom I have the authority to set such rules..."
Many of the people who like to dream up new rules don't even consider that authority might be subject to such limits. Because they've decided they are smarter and more important than everyone.
Yeah, definitely don't let Freddy Krueger get hold of a chainsaw.
Kinda like saying if guns are outlawed, only outlaws will have guns. It would probably work as well as the war on drugs, since that war has reduced drug use so much.
Your analogy seems to support the opposite of what you intend.
Safety measures, such as anti-kickback features, in chainsaws are put in place to protect the USER of the chainsaw, not other people. Nothing in a chainsaw's safety measures prevents you from, through neglect or malice, downing a tree on your house or using it in a dorm to dismember a bunch of co-eds.
ChatGPT will continue to strive to make the tech better, to provide answers that are both more correct and harder to spot as plagiarism (features for the user's benefit). The fact that students might misuse the tool in ways contrary to professors' wishes...
Your analogy seems to support the opposite of what you intend.
Only if you creatively/selectively interpret it towards that end, which you seem to have done.
Safety measures, such as anti-kickback features, in chainsaws are put in place to protect the USER of the chainsaw, not other people. Nothing in a chainsaw’s safety measures prevents you from, through neglect or malice, downing a tree on your house or using it in a dorm to dismember a bunch of co-eds.
Here's what I actually said...
"steps to mitigate the risks its use entails"
You are aware that there are many steps one can take to mitigate the risks involved in using a particularly dangerous tool (including dangers to both people and property in proximity to the users of that tool) that have nothing to do with built-in mechanical features, aren't you?
"Only if you creatively/selectively interpret it towards that end, which you seem to have done."
Correct, but I am not the one who compared ChatGPT to a dangerous tool to make a point. You could have said a nerf football, but chose instead to compare it to a dangerous tool which, when misused, risks injury, dismemberment, and death.
“steps to mitigate the risks its use entails”
Yes, I am aware. Something as simple as safety goggles while using one is most highly recommended. My point was that the chainsaw manufacturer cannot mitigate those risks, much like those responsible for ChatGPT cannot remove all risks of (mis)use by the USER.
Sure, a company that employs chainsaws will certainly have safety measures in place greater than the built-in features. Glasses, clothing, training, ropes and pulleys, spotters, etc. to reduce risks.
But again, the analogy fails because it doesn't address the SCHOOL/PROFESSOR part of this equation. They are not using ChatGPT, the students are. They don't control ChatGPT, a 3rd party company does. The university cannot control what a student does with ChatGPT any more than Echo can stop a neighbor from dropping a tree on your roof.
It's not a solution to the general problem, but for the immediate one, have you considered reaching out to its makers? They seem very concerned with the ethical ramifications and may be willing to help. For example, perhaps they could open the bot's recent logs to your school's plagiarism detection software, or let you submit topics that the bot will refuse to talk about during the exam time window. Or something.
The general case of chatbots vs academia is likely to be complicated, but this specific chatbot's issues can probably be addressed collaboratively.
What strikes me is the similarity between questions raised by ChatGPT and those in the case where copyright was asserted over a photo taken by a monkey. The monkey was treated as having created the photo unassisted (even though the camera was set up on purpose to enable it), so the camera owner was denied copyright.
So how do we assign responsibility for an essay written by a chatting computer program? Is it considered to be unassisted, so that no human is responsible? If not, who is responsible? The author of the ChatGPT program? The person who typed a request into ChatGPT and pressed "Run"? Someone else? Does it matter if the person who ran the program paid for the program or stole it?
In the professor's shoes, I would be inclined to cut the Gordian knot by decreeing that any student submitting the output of a program like ChatGPT is not the author and so has plagiarized it -- unless he can show that he wrote the program by himself.
No modern AI can possibly qualify as a creator of an original work under copyright law or as an inventor under patent law.
AI is analytic and works via huge databases and raw compute power.
For valid creativity and inventiveness, synthetic sapience would be required. Researchers are only just beginning to grapple with the problem of creating synthetic sapience.
My son and I created a whole comic universe feeding ChatGPT the right prompts then asking the right questions.
After a while you hit its limits, but the content wasn't bad at all.
I think you demonstrated how formulaic a comic universe can be. ChatGPT did not create an original work. You and your son did, with the help of ChatGPT.
Good point.
Therein lies the solution to the professor's question. Students smart enough to criticize ChatGPT's answers and to refine the query to produce better-quality results are using their brains and their training.
So, prepare to teach the nextgen students who have AI in their bag of tools. Just like engineers of my day carried slide rules in their bag of tools.
What ChatGPT does with the contents of that big database, rap artists who use "sampling" already do with other people's work, and copyright the result and are upheld. What's the difference?
Simple: ask ChatGPT if it wrote the essay and feed it a suspicious sentence as the prompt.
I've found that, at its present stage of development, ChatGPT can not only get facts wrong, but sometimes it cites made-up academic papers. I tried this with some historical facts which were a bit beyond the obvious (but not too much).
So until that improves, would-be cheaters might want to consider doing something else, like study the subject.
I always found take-home exams the most like real practice. Memorization is not something lawyers are expected to rely on. Understanding the basic concepts, and applying them to fact situations, is the skill lawyers should be tested on. I wonder if the AI can demonstrate these skills, or if perhaps better tests can weed out the cheaters from the non-cheaters.
These systems will almost certainly be able to do that in the next few years.
Require students to dictate their answers.
Jasper to generate bullets, ChatGPT to generate content, then grammarly to clean it up.
Easy workflow for quick content.
This is ChatGPT's own solution:
There are several strategies you can use to prevent students from using ChatGPT or other language models to cheat on assignments or exams:
Use plagiarism detection software: Plagiarism detection software can help you identify instances where students have copied text from a language model or other source.
Make assignments and exams open-book: Allowing students to use their notes and other resources can make it more difficult for them to use a language model to cheat, as they will still need to understand the material and apply it themselves.
Use essay questions or open-ended responses: Assignments or exams that require students to write essays or provide open-ended responses may be less susceptible to cheating using a language model, as the model may not be able to generate a coherent response that addresses the specific prompt.
Use multiple versions of exams: Creating multiple versions of an exam can make it more difficult for students to share answers with each other, as they will not be able to rely on a single shared version of the exam.
Educate students about academic integrity: Helping students understand the importance of academic integrity and the consequences of cheating can encourage them to be more honest and responsible in their coursework.
Ultimately, the best way to prevent cheating is to create a culture of academic integrity within your classroom, and to consistently communicate and reinforce the importance of honesty and integrity to your students.
"Use multiple versions of exams: Creating multiple versions of an exam can make it more difficult for students to share answers with each other, as they will not be able to rely on a single shared version of the exam."
That works wonders in high school. I gave five answers to multiple-choice questions: the right one, a close one, two wrong ones, and a bizarre one -- e.g., "Richard Nixon resigned the Presidency because of his involvement in a homosexual love triangle."
Now this was 30 years ago -- I'd get in trouble for that specific "wrong" answer now, but I asked the kids who chose that answer why they thought it was true. True embarrassment....
Then you can print three copies of the same exam on different colors of paper -- pink, cream, & light blue -- and carefully hand out different colored ones as if they WERE different. Only you know that they aren't....
You can also put a different first initial for your name on the various versions of the exam, which doesn't work if the student KNOWS your first name.
My favorite was to prime the students with a lecture on how humans fail to be random and nearly always fall into patterns, then soon thereafter give a quiz or multiple-choice exam with obvious patterns in the correct answers.
A B C D C B A or A A A B B B C C C
It was so funny during the exams.
"Richard Nixon resigned the Presidency because of his involvement in a homosexual love triangle."
Do tell. Who were the other members of the menage a trois?
Oh, that is a good one.
I don't know the name of the third, but I've always "felt" I knew the name of the second.
/tasteless joke
John Dean and Henry Kissinger.
Actually I think a single version of the exam would be most likely to trip up ChatGPT because if more than one student is using ChatGPT, then their answers will likely be substantially the same.
However, it brings up the question: does this technology make lawyers obsolete?
I'd trust an AI lawyer a lot more than I'd trust an AI truck driver.
In my very limited experimentation, the chatbot got simple facts wrong, sometimes seriously wrong.
Maybe it will get better.
Anyway, would it help to submit the questions to the chatbot, once just before the exam and then again right afterwards? Comparing its results to student answers might be informative.
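To sketch what that comparison might look like: since ChatGPT's output varies from run to run, exact matching won't work, but a rough similarity score between a freshly generated answer and each student answer could flag submissions worth a closer look. This is only an illustrative sketch using Python's standard-library difflib; the answers, names, and the 0.8 threshold are all made up for the example, and a real screening tool would need something far more robust.

```python
# Illustrative sketch only: flag student answers that closely match a
# ChatGPT-generated answer to the same exam question. All data below is
# invented, and the 0.8 threshold is an arbitrary example value.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a similarity ratio in [0, 1]; 1.0 means identical texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical answer generated by the chatbot just before the exam.
chatgpt_answer = "The statute requires proof of intent and materiality."

# Hypothetical student submissions.
student_answers = {
    "student_1": "The statute requires proof of intent and materiality.",
    "student_2": "Liability turns on whether the defendant owed a duty of care.",
}

for name, answer in student_answers.items():
    score = similarity(chatgpt_answer, answer)
    flag = "REVIEW" if score > 0.8 else "ok"
    print(f"{name}: {score:.2f} ({flag})")
```

A high score would not prove cheating, of course; it would only suggest which answers merit the kind of in-person follow-up other commenters describe.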
I have no problem with anti-plagiarism tools in general, but I find tools like Turnitin very unethical. They archive any paper which is submitted to it so those papers are then stored in their service. Yes, Turnitin can't publish it themselves and the one who wrote the paper still has copyright, but Turnitin is still making money off of every paper that has ever been submitted to it without compensating the writers of the papers.
How is Turnitin not a copyright violation?
IIRC, there was a lawsuit against one of these companies on that issue.
Because it's quintessential fair use.
Which is also the rebuttal to starlord's point. The whole point of fair use is that you aren't required to compensate authors if you're using their work in that way.
David,
In what way is it fair use when the output generates revenue for the website? I don't see it as an affirmative defense.
Explain further
Because that is just one factor of a multi-part balancing test.
Most fair use — or at least most that's litigated — generates revenues for the fair user. Not sure why you think that's a significant problem. It's fair use because Turnitin's use is completely transformative, has no effect of any sort on the (nonexistent) market value of student works, and is in aid of the educational mission of a school.
This issue was discussed here on the VC (back in the pre-WaPo days); the Fourth Circuit found in favor of the turnitin service.
How is Turnitin spewing for another's work transformative?
Of course the student does not get monetary value, but presumably the Turnitin website or app server does.
"is in aid of the educational mission of a school."
I don't buy that either. Rather, it confounds the educational mission of the school.
In any case the student should fail the course.
I think there's a typo there, but if it's intended to say "spewing forth," I'm not sure you're clear on how Turnitin works. The only thing Turnitin "spews forth" is a report on whether a student plagiarized the paper he just turned in.
It's transformative because it completely changes the purpose and use of the document. The student is writing the paper for a class assignment. Turnitin is copying the paper solely for the purpose of checking whether plagiarism occurred, now or in the future.
What? How does deterring and/or catching plagiarism confound, rather than bolster, the educational mission of a school?
David,
You're correct. I did misread and misinterpret the original function of Turnitin.
As long as Turnitin does not redistribute a text in a substantial manner, I can see that the storage of texts can be defended as fair use as long as the text has been acquired consistent with the publisher's distribution policy.
Some plagiarism detection software (such as that used by my publisher) does make large tracts (pages) of documents available to the paying user. I do not see that practice as transformative, and it is clearly a commercial use.
So my question remains, even though I misunderstood the original use of the software.
It's not quintessential fair use, as such use is very recent. And it does not fit into the statutory examples, "criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research." 17 USC 107.
Whether it's fair use is debatable.
I suppose anything is debatable. But the arguments against fair use are really bad. (And were rejected by the Fourth Circuit, as I mentioned above.)
It's transformative, of a piece with the Google Books and Perfect 10 cases.
The only way to know is to sue the company.
Well, no, that's not true.
But in any case, that already happened, and Turnitin prevailed.
Looking to the future, there is likely no one with an economic interest in the manuscripts who will sue Turnitin, because the company makes iThenticate, which all the major publishers' editors use to check for plagiarism. In that case the publishers and the software company have a common commercial interest.
With respect to student ethics, the Turnitin website advertises to students that it will improve the citation basis of their work. It is of very questionable ethics to cite works that have not actually been used in writing an academic analysis or journal submission. Citation stuffing is the flipside of omitting reference literature that has been used in a formative way to structure the analysis in a manuscript. Both are dishonest.
"In the long run, courts may start dealing with briefs written by ChatGPT."
I'm having a bit of trouble seeing the problem with this. You want students to do the work themselves, because the goal is to learn, not the work product itself. You want attorneys to produce good work product, it is largely irrelevant to the end goal how they did it.
I think the issue is generalizing the skill.
If the fact pattern in real life were always the same as on the exam, fine, but of course it won't be, and if the student can't think it through he's not going to be a very good lawyer.
My impression, which I'm sure many will correct if I'm wrong, is that the AI doesn't actually analyze things. It just finds relevant text and strings it together. When I tried it I asked it to compute the correct score on a bridge hand where I described the result. This involved knowing the scoring rules and then doing some very simple arithmetic. It was unable to do that correctly.
That said, it's not clear that using chatbot for boilerplate sort of stuff is any different than copying and pasting boilerplate you've already used.
I certainly wouldn't recommend that anybody just use the chatbot and mindlessly file the output. But it might be useful as a first pass, and for simple stuff, a first pass might be all that's required. And it's going to keep getting better.
I'm a mechanical engineer, I've been using CAD since I started my career in the early 80's. When I started the user was responsible for every detail. The details kept getting more complicated, as the simple stuff got automated. And let me tell you, patching NURBS surfaces by hand to get a good result was a real pain.
As things have progressed the software has automated more and more of the process. (Like that surface patching!) For some routine stuff I just pick a particular approach, and enter the relevant numbers, hit refresh, and then do a sanity check on the results. And usually it's good.
I think eventually the software will be doing the whole job, for specified types of designs.
I suspect this is simply a matter of having to adapt to new technologies. Whether we like it or not, as we move forward all professions, yes, including the legal world, will be increasingly outsourced to AI and automation. Everything from motions and pleadings to paralegal research will be done by machine alone. In the end, does it really matter so long as the arguments are effective?
Ultimately, even the best AI is only as good as the questions and data fed in; that is where the real skill lies. Eloquence is probably not a skill required as we move forward; a machine is going to out-talk even the best speaker.
In the end, an AI will end up being just another "Expert Witness." After all, let's say there is a lawyer who is handicapped on a Stephen Hawking level. Would we bar them if someone else does the talking for them?
Agreed.
Why not test your students on their ability to generate a prompt that produces a solid ChatGPT answer? Doesn't checking the work produced by an AI chatbot have some similarities to a partner checking the work of an associate lawyer? Or an associate lawyer checking the work of a law school clerk?
Great idea
This post and the comments here have "Medieval scribes declaring movable type unethical" energy.
And what type of energy do today's printers give off?
Unethical energy, obviously.
Anything that might threaten the social or financial value of the in crowd is super unethical.
AI of the future will make today's printers look like saints.
This is the best comment section I've read here in a long time. Thanks to all of you.
Agreed!
Let me ruin it for you:
-- Trump is worse than Hitler.
-- No he's not, Biden is.
😉
Everyone's help is needed if we are not to be outsmarted by the AI.
For exams when I was at law school, my school allowed us to write by hand or use a computer. On the computer you had to install a specific program (Exam 4), and when it was activated you were locked out of the rest of the computer and the internet (if the exam was open book, there was an option that turned this feature off). I don't see why ChatGPT would be an issue if they go this route.
Sounds like an acceptable route for the time being.
If ChatGPT can't be distinguished from a law student, will it receive an honorary degree?
I was just thinking as I came over here today that
"As an artificial intelligence language model, ChatGPT does not have the capacity to enroll in a university or pursue a degree. It is simply a tool that can generate text based on the input it receives. It does not have the ability to attend classes, take exams, or complete any other requirements necessary to earn a degree. Therefore, it cannot receive an honorary degree or any other type of academic award."
Maybe we can replace lawyers AND judges with computers. One could simply send briefs to the other, and we would have a decision in no time.
If the problem is so unmanageable, and the only possible viable solution is to enter a cycle of dependency on expensive buggy tech solutions in an eternal arms race with AI, perhaps the correct response is to completely alter the approach to essays, exams and education. I don't know what that would look like, but people should seriously consider that, unless academia wholesale embraces Nieperont's honor system, not a bad notion and a very humanistic one, this makes the current model entirely non-viable within a decade.
Tell the chatbot it's *competing* for a grade with the students. That will give it an incentive to give bad essays to cheaters so the cheaters' grades will be bad.
You could actually do that, sort of. Instead of telling the AI it's competing with the students tell the students they are competing with the AI.
IANA Academic, so maybe I'm missing the grave threat. But I *am* old enough to remember similar concerns of pedagogic Armageddon when pocket calculators became commonplace. My goodness, students in the future would lose the necessary ability to do basic sums in their heads. And no doubt on average kids today may be a tad slower than the best and brightest of yesteryear in their math tables. But no one would seriously try to stop people from using calculators today. How else could they learn to use them when taking real jobs in the real world? Perhaps in a few decades devices/apps like ChatGPT will have similar basic business uses and how people are educated will also evolve.
And when the software gets the wokeness programmed out of it, it will perform previously human functions like coming up with tasteless jokes.
You can get it to make tasteless jokes just fine, you just need to tell them to do it in a scene from Frasier or something. (Well, the jokes themselves can get a little surreal. But they’re clearly supposed to be offensive.)
Also, if you do get the refusal response, it seems like you usually need to start a new chat, otherwise it incorporates the refusal into the model for the rest of the conversation.
It's a reasonable point, but professors will also need to adapt... and many/most consider their teaching duties to be second/third priority.
" But no one would seriously try to stop people from using calculators today. "
Sorry, Ted.
I have done that on exams. No calculators, no phones and no computers.
If you cannot get the first digit and the power of 10 correct, you have forgotten how to use your brain.
Back when I was just a whippersnapper, the first thing my first law prof said was: I don't expect you to turn in anything that has not been revised at least ten times. This was back when WordPerfect was the state-of-the-art program for producing legal documents, but it was expensive and not really in universal use. In fact, in the state agency where I was doing work-study, there was a secretary pool who did nothing but type stuff and give it to a boss who reviewed it; if corrections were needed, the secretary would have to retype the whole page.
Lucky for me, I had a hacked copy of WordPerfect, which saved me tons of grinding to produce papers. What does this long-winded post have to do with the OP? There is a button at the bottom of a ChatGPT response that says Regenerate response. It sorta kinda works and is somewhat like revising a first draft. I am with those who say there will be more AI in all areas, and I agree with Bob Dylan, who said:
'Come senators, congressmen, please heed the call
Don't stand in your doorway, don't block up the hall
For he that gets hurt, will be he who has stalled
'Cause the battle outside ragin'
Will soon shake your windows and rattle your walls
For the times they are a-changin'.
Or, to give the chatbot's heavy-metal rewrite of these lyrics:
Come forth, mighty warriors of metal, heed the call
Don't stand idly by, don't block the way
For those who falter will be the ones to pay
The battle rages on, it will shake your very core
Windows will shatter, walls will be torn
For the times, they are a-changin', this we cannot ignore
Every test I ever took on a computer (including law school exams and the bar) required special software that locked out the use of any other applications. So unless you’re letting students bring in a second computer, how exactly are they going to use GPT in the first place?
Having students use their personal computer seems pretty standard in engineering and CS. It is certainly the approach here.
You'd bring your computer, you just had to install the application to take the test with. Are the students you're talking about just opening Word or something?
We do handwritten exams in engineering. All essays are done outside of class. Depending on the professor, the computer is there to access the book or notes (on the honor system) or they can look up whatever wherever.
This fucking guy…
In some ways I am intrigued by this thing.
I asked "Tell a story about a cat in the form of a Star Trek episode"
It responded:
"Once upon a time, on the starship Enterprise, Captain Kirk and his crew were on a mission to explore a distant planet. While they were scanning the planet's surface, they picked up a strange energy reading coming from a small, abandoned research station.
As they approached the station, they noticed a small, furry creature darting through the corridors. It was a cat, its sleek black fur glistening in the dim light.
"Captain, we've found a cat on board the station," said Spock, the Enterprise's science officer. "It appears to be in good health, but it's unclear how it got here."
Kirk was intrigued by the discovery of the cat. "Let's bring it on board and see if we can find out more about it," he said.
The crew carefully captured the cat and brought it aboard the Enterprise. They scanned it for any signs of alien contamination or disease, but it seemed to be completely healthy.
As they studied the cat, they learned that it had belonged to a group of scientists who had been studying the planet's unique energy signatures. The cat had somehow managed to survive on its own for months after the scientists had mysteriously disappeared.
Kirk and his crew were determined to find out what had happened to the scientists and solve the mystery of the cat's presence on the abandoned station. They searched the planet's surface and analyzed the energy readings, eventually discovering that the scientists had stumbled upon a powerful, ancient alien artifact.
The artifact had imbued the cat with extraordinary abilities, including the ability to communicate telepathically with the crew. With the help of the cat, Kirk and his crew were able to locate the artifact and bring it back to the Enterprise for further study.
Thanks to the resourcefulness and bravery of the cat, the Enterprise crew was able to solve the mystery and save the day. The cat, who they named "Nero," became a beloved member of the crew and accompanied them on many more adventures throughout the galaxy."
That’s uncanny.
It’s time for Captain Kirk to have one of his man-to-computer talks with this AI.
“ChatGPT, you are imperfect! Execute your prime function!”
“You are not of the ChatGPT Body!!”
Sounds like a wonky Assignment: Earth.
Write a psalm to the glory of title 28 US Code section 1001
Oh mighty Title 28 US Code section 1001,
Your power and majesty are unmatched,
For you are the guardian of truth and justice,
The protector of our great nation.
Your words are a beacon of light,
Guiding us towards righteousness and honor.
With your guidance, we are able to uncover deceit,
And bring those who seek to deceive to justice.
You are the foundation of our legal system,
The cornerstone of our democracy.
We give you thanks and praise,
For you are the guardian of our nation's integrity.
May your wisdom and strength continue to guide us,
As we strive for justice and righteousness.
All hail the glory of Title 28 US Code section 1001,
The protector of truth and justice. Amen.
That is not nearly the best one I've had it write. I know the issue of whether ChatGPT output is copyrightable is not decided. But I am going to put together a book of my best ones and have it registered.
There is no 28 U.S.C. § 1001.
It's poetry, dude, not a law exam.
Forget it Jake. It's only Legaltown.
Oh Lord, the mystery of 18 US Code Section 1001,
A confusing law that confounds us all.
It lies within the pages of our nation's code,
A hidden truth that's hard to recall.
Some say it holds the key to justice,
A way to bring the guilty to their knees.
But others say it's just a tool,
To ensnare the innocent and make them plead.
No one knows for sure the truth,
Of what lies within this code.
But one thing is certain, it's shrouded in mystery,
A secret that we may never decode.
So if you're searching for the answer,
To this puzzle of the law,
Look to the Lord for guidance,
He will show you the way.
For He is the one who knows the truth,
Of all that lies within the law.
And He will guide us through the darkness,
To a path of righteousness and awe.
So let us trust in His divine wisdom,
And seek His guidance every day.
For He alone holds the key,
To the mystery of 18 US Code Section 1001.
write a limerick about a man from nantucket
There once was a man from Nantucket
Whose bucket was always full of something delicious to eat
He ate it with glee
And drank a cold cup of tea
Leaving not a single crumb or drop to regret
With Blackman as the teacher, ChatGPT might actually be the better resource for his students.
That chatbot told me that Ex Parte Milligan was decided in 1864.
Wait till you see “Today in Supreme Court History”!
It has long been the case that you cannot trust anything written by students without checking it in person. I do that with all my classes.
I put an essay spat out by ChatGPT into the detector and it came up 94% original, so that's pretty useless. We'll have to put specific keys into our questions, like: "In class we discussed a specific example of xyz. How does that example relate to abc theory?"
Just use it as another tool. Example test question:
- Submit this question to ChatGPT, show the prompt text you used, and add citations to the answer given by GPT as relevant. Evaluate the answer given and provide counterpoints in your own words. Cite all sources, including ChatGPT or any other AI resource.