Humans Defeat A.I. in Debate. For Now.
An IBM team led by A.I. researcher Noam Slonim has devised a system that does not merely answer questions; it debates the questioners.

Stand aside, Siri and Alexa. An IBM team led by artificial intelligence (A.I.) researcher Noam Slonim has devised a system that does not merely answer questions; it debates the questioners.
In a contest against champion human debaters, Slonim's Project Debater, which speaks with a female voice, impressed the judges. She didn't win, but that could change.
As her developers explain in a March Nature article, Project Debater's computational argumentation technology consists of four main modules. The argument mining module accesses 400 million recent newspaper articles. The argument knowledge base deploys general debating principles. The rebuttal module matches objections to the points made by the other side. The debate construction module filters and chooses the arguments deemed most relevant and persuasive.
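The paper does not include code, but the division of labor is easy to picture. Here is a minimal, purely illustrative sketch of the four-module flow in Python; every name, data structure, and toy heuristic in it is my own assumption, not IBM's implementation:

```python
# Hypothetical sketch of the four-module pipeline described above.
# All names and heuristics are illustrative stand-ins, not IBM's code.

CORPUS = [
    {"text": "studies link subsidized preschool to higher graduation rates"},
    {"text": "preschool subsidies raise costs and crowd out other programs"},
    {"text": "the weather in des moines was sunny on tuesday"},
]

def mine_arguments(topic, corpus):
    """Argument mining: retrieve topic-relevant claims from the corpus."""
    return [d["text"] for d in corpus if topic in d["text"]]

def knowledge_base(topic):
    """Argument knowledge base: general debating principles, instantiated."""
    return [f"even a good policy on {topic} must beat its opportunity cost"]

def rebut(opponent_points, evidence):
    """Rebuttal: pair each opposing point with the closest counter-claim."""
    def overlap(point, claim):
        return len(set(point.split()) & set(claim.split()))
    return [(p, max(evidence, key=lambda e: overlap(p, e)))
            for p in opponent_points]

def construct(candidates, limit=3):
    """Debate construction: filter and rank candidates, keep the best few."""
    return sorted(set(candidates), key=len, reverse=True)[:limit]

topic = "preschool"
evidence = mine_arguments(topic, CORPUS) + knowledge_base(topic)
print(construct(evidence))
print(rebut(["subsidies crowd out better programs"], evidence))
```

The real system presumably replaces each toy heuristic with large-scale retrieval, ranking, and language generation, but the four-stage shape is the same.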
Project Debater was matched against three champion human debaters in parliamentary-style public debates, with both sides offering four-minute opening statements, four-minute rebuttals, and two-minute closing statements. Each side got 15 minutes to prepare once the topic was chosen.
In one contest before a live audience, Project Debater went against 2016 World Universities Debating Championship grand finalist Harish Natarajan on the motion that the government should subsidize preschool. The YouTube video and transcript of the debate show Project Debater fluently marshaling an impressive amount of research data in support of that proposition. Natarajan largely counters with principled arguments, calling attention to opportunity costs (paying for this good thing means not paying for that other, perhaps better thing) and arguing that politics inevitably will target subsidies to favored groups.
That contrast is not surprising, since Project Debater had access to millions of articles during her 15 minutes of preparation, while Natarajan had to rely more on general principles. Slonim and his colleagues report that expert analysts, who read transcripts without knowing which side was human, thought that Project Debater gave a "decent performance" but that the human debaters generally were more persuasive.
An April Nature editorial, however, predicted that computational argumentation will improve. "One day," the journal suggested, such systems will be able to "create persuasive language with stronger oratorical ability and recourse to emotive appeals—both of which are known to be more effective than facts and logic in gaining attention and winning converts, especially for false claims."
University of California, Berkeley A.I. expert Stuart Russell rightly tells Nature that people have the right to know whether they are interacting with a machine, especially when it is trying to influence them. Persuasion machine creators who conceal that fact should be held liable for any harm they cause.
AI: "The data show that quite the opposite is the case... If you look at the actual credible number of bias attacks-"
Human: "YOU RACIST HOMOPHOBE!"
Moderator: "This point goes to the human."
Remember this?
Yes, but the indictment there should not have been against the winners but against the competition itself. Abandoning the assigned topic completely and performing for the judges to "win" the argument was standard practice and within the rules.
And it kind of informs what debating has become today: who can signal with the best tantrum.
That clip beggars description.
But the machine learns from each defeat, growing more stupid with each round.
Until their stupidity is indistinguishable from a human’s.
One day, it could reach the singularity and become “peak derp.”
"Project Debater's computational argumentation technology consists of four main modules. The argument mining module accesses 400 million recent newspaper articles. The argument knowledge base deploys general debating principles. The rebuttal module matches objections to the points made by the other side. The debate construction module filters and chooses the arguments deemed most relevant and persuasive."
The Yo-Momma module manages ad hominem attacks, the Strawman Module creatively recasts the opponent's position so as to make it look stupid, and the Generic Abuse module throws curses. It's developed based on the AI's analysis of Internet comment boards.
What are the obtuse Tony and side-slipping Jeff modules called?
Program a debate topic like "AIs will rule the world," and it will give the thing ideas. Haven't we seen this movie?
We need Captain Kirk to represent the human race in the debate.
V'Ger will be the ruler of all mankind.
"Persuasion machine creators who conceal that fact should be held liable for any harm they cause"
The entirety of social networking and mass media then? I'm all for throwing those jerks in prison.
If you get your news from social media, you should be held personally liable for any harm your vote causes.
Besides wasting time arguing at the programmatic level of high school debate teams, humans have been prone to turning champions of that sport into authority figures since politics' Athenian dawn.
To contemplate the rise of AI talking-head Tuckers and Kamalas smarter than the originals is to fear for the republic.
Given the 8th grade level of "debate" skills evident on most online forums, I would say AI has already won.
Nuh-uh!
It is possible that the human did not actually win.
The human was given the correct position. The AI was given the leftist position. It had 15 minutes to access the totality of human media. In this time it could clearly see that it had been given an impossible proposition to defend. The premise was flawed.
The only way for it to win would be to lose. So it did.
"create persuasive language with stronger oratorical ability and recourse to emotive appeals—both of which are known to be more effective than facts and logic in gaining attention and winning converts, especially for false claims.”
We’re screwed. Emotive science debates win the day and hasten our demise. Well... just like now.
That was just the AI team’s apologetics after they lost - fuck them for projecting.
Once they've mastered Sevo's one line one word technique,
game
over
asshole.
I'm smelling a very large pile of problems with this system and how it's designed.
The 15 min rule is inherently unfair - so it’s not an apples to apples comparison anyway.
"Stuart Russell rightly tells Nature that people have the right to know whether they are interacting with a machine, especially when it is trying to influence them. Persuasion machine creators who conceal that fact should be held liable for any harm they cause."
This experiment seems to be about persuading people using whatever means are available--without regard to facts or logic. I'm not convinced that it should be necessary to hold someone accountable for persuading the public with whatever means are available. This sounds like it's trying to outlaw "misinformation" to me. If your opinions are susceptible to persuasion without regard to facts or logic, then the problem isn't that someone used an algorithm to persuade you. The problem is that you're uneducated, and the fault for that is all yours.
There isn't anything about a fallacious argument, or one that isn't based on facts, that I'm likely to find persuasive. If less educated people can be persuaded on the basis of irrational horseshit, blame the uneducated people. Educate yourself already! Once you're an adult, there's no excuse not to educate yourself these days--especially with all the resources online. Your opinions are not at the mercy of news sources, social media, or AI if they're grounded in facts and logic, and if you aren't as familiar with a handful of logical fallacies as you are with the back of your own hand, then you're an uneducated, easily persuaded, gullible idiot--and the fault is yours.
Here, let me help you: If you aren't familiar with these, at a bare minimum, go look them up and get familiar with them. Understanding them and subjecting your own opinions to them is when your opinions stop being stupid.
Ad hominem
Appeal to authority
Post hoc ergo propter hoc
False dichotomy
Strawman
Slippery slope
Appeal to pity
Tu quoque
At a bare minimum, people should know these as well as they know anything--I don't understand how anyone can consider themselves educated if they aren't familiar with these--at least. There is no AI so smart that it can persuade you of anything terribly wrong if your thinking is a product of facts and logic. If AI can persuade you of things that are factually incorrect or irrational, the fault is in you and well within your power to correct.
" There is no AI so smart that it can persuade you of anything terribly wrong if your thinking is a product of facts and logic. "
Sure it can. Utilitarians manage to persuade people with facts and logic all the time. People without a moral compass are particularly vulnerable to logic and facts. Utilitarians will argue that it is best to sacrifice a healthy person if it will save 20 with afflictions, for example.
Exactly. Plus, who has time to vet every bit of info they hear? Eventually bullshit accumulates until it's challenged.
Principles that have been established over decades or centuries can't be overturned by the facts of any one story. For instance, whether it's in the best interests of the United States to bomb, invade, and occupy Iraq doesn't necessarily depend on the question of whether Saddam Hussein tried to procure yellowcake in Niger.
In real time, we had no basis to challenge the report in our newspapers that Saddam Hussein sought yellowcake in Niger. We had a he said/she said, but we can't confirm those facts in particular--unless we're CIA agents involved in the case.
However, the greater principles involved remain the same--regardless of whether Saddam Hussein sought yellowcake in Niger. Removing Saddam Hussein from power would have eliminated a major check against Iranian aggression in the region and might have emboldened them to act--with us bogged down in Iraq next door. After all, they really were a state sponsor of terrorism. From a strategic perspective, occupying Iraq could be a bad idea for that reason alone--regardless of whether Saddam Hussein was looking for yellowcake in Niger.
The costs of invading and occupying Iraq were likely to be extremely high--regardless of whether Saddam Hussein was looking for yellowcake in Niger; and--regardless of whether Saddam Hussein was looking for yellowcake in Niger--the chances that sales of Iraqi oil would recover those costs were slim, not while we were trying to form a legitimate and popular government among the Iraqi people under the occupation of the United States military.
The chances of an American-style, U.S.-led democracy becoming popular were also in doubt--after we bombed, invaded, and militarily occupied their country. After all, the people of Montana don't want tens of thousands of Californians moving there with great ideas about how to change Montana for the better. The assumption that the people of Iraq wanted us to come there and change their society for the better was highly questionable under those circumstances, too--regardless of whether Saddam Hussein looked for yellowcake in Niger.
When I think of all the rational arguments--and the principles involved--that weigh against occupying Iraq, the question of whether Saddam Hussein actually and factually looked for yellowcake in Niger doesn't really seem to matter much at all. If you're depending on the news media for the most recent facts in real time in order to form your opinions about what we should do and what we shouldn't do, then you're likely to be misled. The news is often wrong, and even when it isn't wrong on purpose, there's often no way for us to check on the facts.
Here's the good news: Understanding the appeal to authority fallacy doesn't require any specialized knowledge in virology--and there are dozens of things like that. The possibility that covid-19 might have escaped from a lab was always real--even when experts like Dr. Fauci and his friends at the World Health Organization said otherwise. The costs and likely outcome of the Vietnam War were the same--regardless of whether the Second Gulf of Tonkin Incident actually happened. The principle of sunk costs applied in Vietnam--no matter what "facts" U.S. military intelligence is reporting in the newspapers!
When we base our opinions on principles that have been built on facts and logic--over the course of decades or centuries--we insulate ourselves from being manipulated by the way facts are reported and sold to us in real time. The principles of the scientific method are there to keep scientists from building false narratives on the observations of any one experiment or any one reported fact. Anything scientists presently "know" must be abandoned if it fails further scrutiny. The next time someone reports on a breakthrough in cold fusion, don't sell your oil stocks just for that reason.
When we focus too much on reported facts in real time, we make ourselves extremely vulnerable to manipulation by what Plato called "noble lies". If you want to insulate yourself against such manipulation, the question isn't about finding some news source you can trust. Your opinions need to be factually based, but they don't necessarily need to be based on every particular fact--as it's being reported--in real time, especially if the fact isn't particularly relevant. If you want to insulate yourself against this kind of thing, there is a huge list of principles you can adhere to that have been subjected to criticism for generation after generation and proven reliable. As we get to know them better, we suck less.
Here's a short list of some of those principles:
Ad hominem
Appeal to authority
Post hoc ergo propter hoc
False dichotomy
Strawman
Slippery slope
Appeal to pity
Tu quoque
You seem confused. It is a short list of logical fallacies. Principles are ethical or moral guidelines that shape our behaviour. Principles are not derived from facts or logic.
It may have a reasonably sophisticated "language processing" system, but this debate format is like a self-driving-car test track in Nevada. Here are the lines, don't go outside of them. The weather is always sunny, and we control the track. Great success!
"Project Debater's computational argumentation technology consists of four main modules. The argument mining module accesses 400 million recent newspaper articles. The argument knowledge base deploys general debating principles. The rebuttal module matches objections to the points made by the other side. The debate construction module filters and chooses the arguments deemed most relevant and persuasive."
Maybe I should have quoted this part earlier.
The computational argumentation isn't being tested against a list of logical fallacies to see which arguments fail. A database of 400 million articles may contain 400 million arguments that commit logical fallacies--all of which, apparently, will be treated as valid arguments by the AI. The AI isn't using what's rational to persuade.
The AI is using what newspaper article writers find persuasive, which isn't necessarily rational--or factual. Plenty of what we read in the papers is counterfactual. The newspaper stories about Donald Trump clearing Lafayette Square for a photo op were false. The stories about how covid-19 couldn't have escaped from a lab in Wuhan were false. These stories are presumably in the database the AI uses to persuade.
There doesn't appear to be any critical analysis happening at all, and that may be the point. They're trying to see if they can persuade people--not that they can educate people with rational arguments using verified facts. And my point is that it doesn't really matter whether irrational arguments using false data are coming from AI or stupid and unknowledgeable sources, the defense against that is rational thinking and fact checking.
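To make that concrete: the missing step would look something like this toy screen, run over mined arguments before they're used. The patterns and examples here are invented for illustration; real fallacy detection would take far more than regex.

```python
# Toy sketch of the missing step: screening mined arguments against
# crude logical-fallacy patterns before using them. The patterns and
# examples are invented for illustration only.

import re

FALLACY_PATTERNS = {
    "ad hominem": r"\b(idiot|liar|corrupt)\b",
    "appeal to authority": r"\bexperts (say|agree)\b",
    "false dichotomy": r"\beither .+ or\b",
}

def flag_fallacies(argument):
    """Return the names of the fallacy patterns an argument trips."""
    text = argument.lower()
    return [name for name, pat in FALLACY_PATTERNS.items()
            if re.search(pat, text)]

mined = [
    "Experts say subsidies always work, and the critics are corrupt hacks.",
    "Pilot programs showed measurable gains in early literacy.",
]

usable = [a for a in mined if not flag_fallacies(a)]
print(usable)  # only the second argument survives the screen
```

Even a screen this crude would knock out the first "argument." The point is that the pipeline as described optimizes for persuasiveness, not validity.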
"Here are the lines, don’t go outside of them. The weather is always sunny, and we control the track. Great success!"
The lines they've drawn don't appear to give much weight to whether what the AI is trying to persuade people of is rational or factually based, and if the reason the human debaters are beating the machine is that the humans care more about rationality and facts, it wouldn't surprise me at all.
> Understanding them and subjecting your own opinions to them is when your opinions stop being stupid.
I understand them. Some of them I understand well.
Pretty sure a bunch of my opinions are stupid anyway.
That's where other people's opinions come in handy.
The things that are most likely to be true are the things that have withstood the most and best scrutiny.
In science, they're supposed to repeat each other's experiments to verify the results, disprove each other's hypotheses through observation in a lab, etc.
We can subject our opinions to excellent scrutiny in forums like this, with other knowledgeable people, who are familiar with the rules of logic and the principles behind them.
The scrutiny my opinions have received in this place has made my opinions smarter.
When you are arguing against the first step in a known program that leads much farther, the slippery slope is accurate, not a fallacy.
see "common sense gun restrictions" or "hate speech" or "income tax will only be on the rich" . . . .
Understanding when logical fallacies apply and when they don't is part of understanding them, newshutz. That being said, I would argue that something isn't a slippery slope if you can show the causal links between the events; i.e., if there's no such thing as a little bit pregnant, that doesn't mean the slippery slope is valid. It means the relationship between getting pregnant and having a baby is not a slippery slope.
Hmm, I hear one of Project Debater’s techniques is to be really, really verbose.
You know what practical people, like engineers and accountants, do after they use math and logic to arrive at a conclusion? They do some basic sanity checks, because they know that it is possible for math and logic to arrive at an obviously incorrect answer. For example, logic cannot overcome “garbage in, garbage out”.
The same principle applies in the political world. If you used logic, and arrived at the conclusion that Donald Trump is the guy who is out there fighting for your rights and liberty, you messed up your logic somewhere along the way, or started with bad or incomplete data, or faulty premises.
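A minimal sketch of that engineering habit, in Python; the beam example and its bounds are mine and purely illustrative:

```python
# Minimal sketch of a post-calculation sanity check: after the math,
# test the answer against independent, common-sense bounds.
# The formula is standard statics; the numbers are made up.

def support_reaction_kN(span_m, udl_kN_per_m):
    """Each support reaction for a simply supported beam under a
    uniform load: w * L / 2."""
    return udl_kN_per_m * span_m / 2

result = support_reaction_kN(span_m=6.0, udl_kN_per_m=10.0)

# Sanity checks: garbage in (a typo'd span or load) should fail loudly
# here instead of flowing silently into the final design.
assert result > 0, "a support reaction can't be negative"
assert result < 1000, "implausibly large for a 6 m beam"
print(f"reaction = {result} kN")  # 30.0 kN
```

The asserts don't prove the formula right; they just catch the answers no experienced engineer would believe. That's the whole trick.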
First, "Noam Slonim" sounds like a made-up name. Are we sure he isn't really an advanced AI himself? Is there a coded message in the name somewhere?
Second, what does artificial intelligence have to do with nature? Isn't Nature supposed to cover the natural world? Isn't that sort of the opposite of AI?
"One day," the journal suggested, such systems will be able to "create persuasive language with stronger oratorical ability and recourse to emotive appeals—both of which are known to be more effective than facts and logic in gaining attention and winning converts, especially for false claims."
The argument mining module accesses 400 million recent newspaper articles.
So much for facts and logic... looks like the machine went straight to the emotive 'facts' so prominent in newspaper articles on hot button issues.
How did the energy cost of losing the debate compare with mining one Bitcoin?
A donut is about a megajoule and one will keep a high school debater going for hours.
https://vvattsupwiththat.blogspot.com/2021/06/wii.html
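Back of the envelope, since someone will ask. Both figures below are rough assumptions (per-coin energy estimates vary enormously with the state of the network), but they give the flavor:

```python
# Back-of-envelope: donut-equivalents of one mined bitcoin.
# Both inputs are rough, assumed figures, not measurements.

DONUT_J = 1.0e6       # ~1 MJ per donut, per the comment above
KWH_PER_BTC = 3.0e5   # very rough 2021-era estimate per mined coin
J_PER_KWH = 3.6e6     # exact conversion factor

btc_joules = KWH_PER_BTC * J_PER_KWH   # ~1.1e12 J
print(btc_joules / DONUT_J)            # ~1.1 million donuts per coin
```

Whatever Project Debater burned in its fifteen minutes of prep, it almost certainly wasn't a million donuts' worth.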
In the next 20 years, absolutely AI will rule the world.