The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"AI and the Death of Literary Criticism"
A very interesting piece by Prof. Thomas Balazs in Quillette. An excerpt:
When ChatGPT can analyse Hamlet as well as any grad student, we might reasonably ask, "What is the point of writing papers on Hamlet?" Literary analysis, after all, is not like building houses, feeding people, or practising medicine. Even compared to its sister disciplines in the humanities (e.g., history or philosophy) the study of literature serves little practical need. And, besides, when machines can build houses as easily as people, we won't need people to build houses either.
So, why do we teach English literature (or "language arts," as some secondary schools now call it) at all? According to the nineteenth-century British literary critic Matthew Arnold, the purpose of studying and teaching literature is "to know the best that is known and thought in the world, and by in its turn making this known, to create a current of true and fresh ideas." … English literature was, in truth, a substitute for religion. We wanted people to be good, but we no longer believed in God. Instead, we believed in Shakespeare, Milton, and eventually Toni Morrison. Until we didn't.
It's always been problematic, though, this idea that literature makes you a better person. Besides the obvious counterfactuals—the allied soldiers allegedly found copies of Johann Wolfgang von Goethe's works in the desk drawers of Nazi prison guards when they liberated the camps—there were the problems that always arise when you try to push your religion on other people.
Our religion was literature, and like any people of true faith, we deeply believed in it, thought it was essential, thought everyone must be saved through it. The remarkable thing was that we somehow convinced American college presidents of the idea, but then again, many of them, like University of Chicago president Robert Hutchins, creator of the "Common Core" and advocate of "Great Books," were members of the same religion. Not all countries make students of mathematics and engineering take literature courses, but in the United States we do. So for nearly a century, we evangelised our religion to college students, some of whom were already in love with reading and therefore happy to worship at the Temple of Literature. Many were not, but, nonetheless, we rammed Shakespeare, Herman Melville, and Toni Morrison down their throats—to make them better people.
That doesn't mean that it necessarily stayed with them…. Some students of the right temperament and with the right intellectual predilections are drawn to the Temple of Literature, but most are not. For most, it is like going to Sunday school—they endure it reluctantly and quickly forget any lessons learned.
But that's just an excerpt; here's the whole thing.
Isn't allowing students to cheat instead of reading books the main purpose of generative AI? Cheating sites have been high up in search results for what seems like forever. Around 2000 I was in the audience at a legislative committee hearing, watching representatives of schools ask for the right to sue companies selling papers. The existing law was enforced by public officials who didn't care what the Internet had done.
Literature is God-given talent on display. You care about the characters. You experience emotions, laughing, crying. You marvel at surprising, clever, unforeseen references to prior events. AI has no talent. Talent cannot be predicted and cannot be understood. AI has no sense of humor. AI cannot be surprised, nor can it marvel. The hardest literary feat is to make people laugh, and that is not appreciated by prize givers. Literary types are pompous, pedantic, humorless wokes. They are annoying. That should not detract from the enjoyment of great human talent. Literary criticism requires a talent as well, if a lesser one: the ability to articulate these appreciations.
I don't know that ChatGPT is there yet, but most jokes do follow predictable formulas.
The same could be said about most writing. Even Shakespeare has a lot of obvious artifice.
Computer programs could already compose things that looked like the boring parts of Mozart 30 years ago. Writing is certainly easier than music composition. I'd be surprised if AI writing wasn't at least as good as the boring parts of Shakespeare.
There are few modern works of writing I've enjoyed that I could point to and say AI could never create anything as good. Because we have to remember, AI is in its infancy. It's going to get better. And so much of our writing is formula anyway.
Some of us may remember Cliffs Notes. How quaint they seem!
Eugene, again you have it backwards.
Here is arch-atheist Christopher Hitchens on the KJV Bible. Hitchens writes:
Four hundred years ago, just as William Shakespeare was reaching the height of his powers and showing the new scope and variety of the English language, and just as “England” itself was becoming more of a nation-state and less an offshore dependency of Europe, an extraordinary committee of clergymen and scholars completed the task of rendering the Old and New Testaments into English, and claimed that the result was the “Authorized” or “King James” version. This was a fairly conservative attempt to stabilize the Crown and the kingdom, heal the breach between competing English and Scottish Christian sects, and bind the majesty of the King to his devout people. “The powers that be,” it had Saint Paul saying in his Epistle to the Romans, “are ordained of God.” This and other phrasings, not all of them so authoritarian and conformist, continue to echo in our language: “When I was a child, I spake as a child”; “Eat, drink, and be merry”; “From strength to strength”; “Grind the faces of the poor”; “salt of the earth”; “Our Father, which art in heaven.” It’s near impossible to imagine our idiom and vernacular, let alone our liturgy, without them. Not many committees in history have come up with such crystalline prose.
We used religion to give us literature. So say Ryken, Northrop Frye, T. S. Eliot, and C. S. Lewis.
I kind of like the way Isaac D'Israeli put it, but AI tools will have to mull over the nuances in a future where we will have lost the capacity for critical thinking:
A predilection for some great author, among the vast number which must transiently occupy our attention, seems to be the happiest preservative for our taste: accustomed to that excellent author whom we have chosen for our favourite, we may in this intimacy possibly resemble him. It is to be feared that, if we do not form such a permanent attachment, we may be acquiring knowledge, while our enervated taste becomes less and less lively. Taste embalms the knowledge which otherwise cannot preserve itself. He who has long been intimate with one great author will always be found to be a formidable antagonist; he has saturated his mind with the excellences of genius; he has shaped his faculties insensibly to himself by his model, and he is like a man who ever sleeps in armour, ready at a moment! The old Latin proverb reminds us of this fact, Cave ab homine unius libri: Be cautious of the man of one book!
— Curiosities of Literature, Vol. III, "The Man of One Book"
The purpose of literary studies in secondary and undergraduate studies is to deepen students' understanding of literature, and the essays they write are training exercises. The fact that a machine could write essays just as good is not relevant for this purpose, any more than the existence of calculators eliminates the utility of learning multiplication.
The purpose of graduate work in literary studies in an AI world can only be to train future teachers of undergraduates. But isn't that pretty much the case already? No one who is not himself or herself a professional student of literature (i.e., a professor or a graduate student) or an undergraduate reads routine professional literary studies. Occasionally a brilliant mind like Harold Bloom produces something truly new, and that will continue, but the bulk of literature professors will function like club golf pros, and their critical essays are like the rounds that a club pro plays himself, for pleasure and to keep his skills sharp.
Remember the “Leave It to Beaver” episode where Beaver does his book report on “The Three Musketeers” based on the Ritz Brothers movie version instead of Dumas’s novel? Never got the idea of book reports anyway: “he’s more like the slightly better than average Gatsby.”
Frank
https://upfront.scholastic.com/content/dam/classroom-magazines/upfront/issues/2023-24/051324/p24-cr-cartoons/PO1-UPF051324-CR.jpg
Anyone remember the books on Khan Noonien Singh’s desk in “The Wrath of Khan”?? I’ll start with Dante’s “Inferno” (nice summer read when you’re banished to Ceti Alpha V).
Frank
https://s3.amazonaws.com/lowres.cartoonstock.com/education-teaching-plagiarise-book_report-download-collated-student-aba0682_low.jpg
Brilliant and provocative essay by a guy who seems to know more about literature than about what to expect from AI, or at least from AI as it is now evolving. Two points:
1. It will be a long time—and given current AI algorithms, maybe never—before an AI critique of Moby Dick turns up anything insightful. Currently, one of my favorite sources of botched AI hilarity has been queries about Moby Dick. Something about that book boggles the algorithms. Irony is one obvious contributor. Moby Dick layers on irony like lasagna layers on cheese.
2. Regurgitation of training materials concatenated via statistical permutations does little or nothing to inform original thought. Original thought comes from some cognitive capacity still obscure, and apparently not modeled by current AI algorithms. For some reason—I have no idea why; I do not know anyone who does know—the writing process, as it unfolds, has capacity to test and improve original thought. Some presumptively brilliant ideas will write; some will not. That tells you something. That seems a benefit almost as mysterious as it is valuable. AI algorithms in present configurations seem unlikely to confer such a benefit.
Cognitive insight, thankfully, is not confined to concatenating rationalisms. Capacities to learn chess, or to do coding, seem categorically different than whatever it takes to think with imaginative insight. Were it otherwise, rationalism would prove a substitute for experience, and that has not happened.
When ChatGPT can analyze Hamlet as well as any grad student, we might reasonably ask, "What is the point of writing papers on Hamlet?"
Does the AI analyze Hamlet in any sense, or does it just compile bits of already published analyses? If the latter, which is my impression, it is not analyzing at all, but preparing a compendium of analyses. This generates no insights, offers no different perspectives, explores no subtleties, and so on.
Why should a grad student write a paper on Hamlet? Well, the student might learn something - they are a student, after all - might sharpen their analytical skills, might - rarely - uncover a new point of interest.
Does the AI analyze Hamlet in any sense, or does it just compile bits of already published analyses? If the latter, which is my impression, it is not analyzing at all, but preparing a compendium of analyses.
Words are just objects to an LLM. Training that kind of model is teaching it to connect those objects in the right patterns. As you say, they get trained on massive amounts of existing text in order to see what the 'right patterns' are. Then, when you give one a prompt such as, "Analyze the actions of Iago from Othello and compare and contrast theories about what his motivations were," it is going to look for essays written by literary critics and synthesize what those Shakespearean scholars had to say. It is not going to look at the play itself, because it wasn't tasked with doing that. So, I think that you are correct that it can't provide any new insights on what motivated Iago to act the way he did.
Note: When I said that I'm on the inside at a very low level, I mean that I have no expertise or understanding of the software engineering of AI models. I'm at a low level of QA for one of the larger LLMs out there. (When you see job advertisements for people to 'train' AIs, that's what I'm doing these days.)
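A minimal sketch of that pattern-matching, assuming nothing fancier than a toy bigram table in Python (all the names below are made up for illustration; no production LLM works at this scale or this simply):

from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a scrap of
# text, then "generate" by always emitting the most frequent successor.
# The model never looks past the text itself -- words are just objects.
corpus = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, n=8):
    out = [word]
    for _ in range(n):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    return " ".join(out)

print(generate("to"))  # -> "to be or not to be or not to"

Everything it will ever "say" is already latent in those counts; scale the table up by many orders of magnitude and replace it with a neural network, and you have the flavor of the thing.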
JasonT20 — Gratuitous career advice. Approach with skepticism:
Keep at that QA for a bit. Pick up some graphical and typographical expertise. Study Tufte, especially his first two books, for the former. Learn to code Python, not because it will prove an enduring career asset, but because it will enrich QA insight, it's not too hard to get the rudiments, and it will impress low-level managers to see it on your resume.
Then make it clear to superiors as you go that you intend a career as a senior QA manager, not as a software engineer. And especially not as an AI engineer. That is crucial. Either that or get out.
AI may not prove great at a lot of things, but it will likely prove ferocious at devouring its coders. And there will be an inexhaustible supply of foreigners signing up to be devoured, at relatively low wages.
Logic suggests really good QA guys ought to end up supervising other kinds of specialists, and get paid plenty to do it. I don't think the AI development companies yet give that the weight they will likely give it later. As usual, the coding specialists will stay oblivious, while boosterish business-trained managers overvalue their accomplishments, and underpay their efforts.
If after a few years that does not seem to be working out, do not stick with it too long. Consider an early career change, if you are not yet too old to pull it off. I think QA expertise will likely generalize better, if there proves to be a need to bail out of a career path in a turbulent or troubled industry.
On one forum I read, maybe this very one, someone said they took the concept of predictive modeling of sentences based on many examples of possible next words, and built an engine, and it sort of worked.
This would also explain why the first samples out there used incredible adjunct prompts like "...in the style of Shakespeare." Or Eugene Volokh, for that matter, for those who remember. Generate a response, then do a predictive translation pass but only through the body of work of the author, boom, in his style.
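At toy scale the style half of that is easy to see: build the same kind of next-word table from a single author's text, and the output can only recombine that author's patterns. A rough, purely illustrative sketch (the two-pass "translation" idea above is speculation, not a documented technique):

import random
from collections import defaultdict

def train(text):
    # Map each word to every word observed immediately after it.
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def babble(table, seed, n=10):
    out = [seed]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))  # sample a seen successor
    return " ".join(out)

# "Style" falls out of the corpus choice: train on one author and the
# output can only recombine that author's patterns.
henry_v = train("once more unto the breach dear friends once more")
print(babble(henry_v, "once"))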
Cognitive insight, thankfully, is not confined to concatenating rationalisms. Capacities to learn chess, or to do coding, seem categorically different than whatever it takes to think with imaginative insight. Were it otherwise, rationalism would prove a substitute for experience, and that has not happened.
Being on the 'inside' of AI development at a very low level for the last several months has made me pretty sure that fears of the coming AI apocalypse are unjustified. AI will definitely be disruptive, even more than it already is, but Stephen has a good point here. It [an LLM] is still getting the mechanics of what it is supposed to do wrong fairly regularly. Eugene regularly posts news of chatbot legal briefs caught doing something laughable, for instance. To think that AI is on the cusp of replacing human imagination and creativity is still science fiction.
Also, I can always count on Stephen to use (correctly) a word that is completely unfamiliar to me. Perhaps that is proof enough that he isn't secretly an AI chatbot.
There was an old AI project called Cyc. Its purpose was to grab the bull by the horns and try to create a brain database of general-purpose understanding of the real world. AI had long had the problem that whatever it reasoned about, it had precisely zero knowledge of the real world. It was "pushing symbols," with no understanding of any of it.
The guy had a team and they'd enter facts all day long, then let the engine think about it overnight. This went on for years.
One day they came in and asked it for any ideas, insights it had. It asked them a question:
"If I turn around, is what's behind me still there?"
That's the peek-a-boo realization, a quantum-leap level of understanding of the world.
Predictive AI will not be capable of this, because it's just, as AI elders once said, "pushing symbols".
You know what predictive AI is? It's Searle's Chinese room.
A guy sits in a sealed room. He doesn't understand Chinese. But people slip him papers with Chinese written on them. He does have massive tomes about Chinese sentences, rules of grammar, ideas and thoughts. He pores through them and produces a response, which he slides out through the slot.
He still doesn't understand Chinese. But the question is, can the room as a whole be said to understand Chinese?
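A toy schematic of the room, with made-up patterns and canned replies (Searle's point survives at any scale: swap the dict for tomes of rules and nothing changes):

# Searle's room as a lookup table: match the incoming symbols, slide
# back the prescribed symbols. Nothing here understands Chinese --
# not the function, not the table, not the machine running it.
RULE_BOOK = {
    "你好吗": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗": "当然，我懂中文。",  # "Do you understand Chinese?" -> "Of course I do."
}

def the_room(slip_of_paper):
    # Consult the tomes; if no rule matches, slide out silence.
    return RULE_BOOK.get(slip_of_paper, "……")

print(the_room("你懂中文吗"))  # the room claims an understanding it doesn't have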
I'll bet the last proton will have decayed in 10^34 years before AI would have generated that connection, or any connections in...
AI: "Ahhhh...James Burke, my nemesis! We meet again!"
Or that, for that matter.
I need to get paid for this shit.
No man but a blockhead ever wrote, except for money.
— Samuel Johnson
A lesson to us all, of course.
Mr. Keating of the Dead Poets Society would be appalled at this development. And to him I would say, this is a long overdue moment of self-reflection.
I don’t see anything particular to literature in this argument. In an era of ChatGPT, should people be educated at all? In anything? After all, nothing meets the criteria set. If a subject has to make people good to be worth teaching, no subject will do that. Nor will any subject guarantee people will be wise, make good decisions, or meet any other unreachable demand one might make. If those are the criteria, all education is worthless.
But the question remains. Is Socrates dissatisfied better than a pig satisfied? It’s a value question that has no absolute answer. If the OP prefers living as a pig, I can’t answer him. No education can guarantee anyone will become Socrates. But without education, we can be pretty sure people will remain pigs.
Is there something to people that is more than ChatGPT, that cannot in fact be replicated by it? I suspect that to prefer a life as Socrates to life as a pig, one has to believe that that’s so.
Morality tales, such as Aesop's fables, are intended to impart wisdom.
Sports, in theory, are supposed to impart the value of fair play.
To guarantee people will be good, and not bad, that's more the role of religion and the criminal justice system.
Someone once said the purpose of religion was to keep you good when nobody was looking.
The sun will burn out before AI would think of that on its own.
They don’t guarantee it either. Humans just don’t come with guarantees. People have been complaining to the Manufacturer about it for millennia, but so far without much success.