Stand aside, Siri and Alexa. An IBM team led by artificial intelligence (A.I.) researcher Noam Slonim has devised a system that does not merely answer questions; it debates the questioners.
In a contest against champion human debaters, Slonim's Project Debater, which speaks with a female voice, impressed the judges. She didn't win, but that could change.
As her developers explain in a March Nature article, Project Debater's computational argumentation technology consists of four main modules. The argument mining module accesses 400 million recent newspaper articles. The argument knowledge base deploys general debating principles. The rebuttal module matches objections to the points made by the other side. The debate construction module filters and chooses the arguments deemed most relevant and persuasive.
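The four-module flow described above can be caricatured in a few lines of code. This is purely an illustrative sketch based on the Nature article's module descriptions, not IBM's implementation: every function name, the toy corpus, and the crude length-based "persuasiveness" score are invented for illustration.

```python
# Hypothetical sketch of Project Debater's four-stage pipeline.
# All names and the scoring heuristic are illustrative, not IBM's code.

def mine_arguments(corpus, topic):
    # Argument mining: pull sentences from the corpus that mention the topic.
    return [s for s in corpus if topic in s.lower()]

def knowledge_base_arguments(topic):
    # Argument knowledge base: general debating principles,
    # instantiated for the chosen topic.
    return [f"Society has a duty to invest in {topic}."]

def rebut(opponent_points, stock_rebuttals):
    # Rebuttal: match the opponent's claims to prepared counterpoints.
    return [stock_rebuttals[p] for p in opponent_points if p in stock_rebuttals]

def construct_debate(candidates, max_points=2):
    # Debate construction: keep the candidates judged most persuasive
    # (here, crudely, the longest sentences).
    return sorted(candidates, key=len, reverse=True)[:max_points]

corpus = [
    "Studies link preschool attendance to higher graduation rates.",
    "Preschool subsidies narrow achievement gaps, researchers report.",
    "Unrelated sentence about highway funding.",
]
mined = mine_arguments(corpus, "preschool")
kb = knowledge_base_arguments("preschool")
rebuttals = rebut(
    ["preschool crowds out other spending"],
    {"preschool crowds out other spending":
     "Long-run returns to preschool exceed its cost."},
)
speech = construct_debate(mined + kb + rebuttals)
print(speech)
```

Even this toy version shows why the real system leaned on mined evidence: with a large enough corpus, the mining stage supplies far more topic-specific material than any stock of general principles.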
Project Debater was pitted against three champion human debaters in parliamentary-style public debates, with both sides offering four-minute opening statements, four-minute rebuttals, and two-minute closing statements. Each side got 15 minutes to prepare once the topic was chosen.
In one contest before a live audience, Project Debater went up against 2016 World Universities Debating Championship grand finalist Harish Natarajan on the motion that the government should subsidize preschool. The YouTube video and transcript of the debate show Project Debater fluently marshaling an impressive amount of research data in support of that proposition. Natarajan largely counters with principled arguments, calling attention to opportunity costs (paying for this good thing means not paying for that other, perhaps better thing) and arguing that politics inevitably will target subsidies to favored groups.
That contrast is not surprising, since Project Debater had access to millions of articles during her 15 minutes of preparation, while Natarajan had to rely more on general principles. Slonim and his colleagues report that expert analysts, who read transcripts without knowing which side was human, thought that Project Debater gave a "decent performance" but that the human debaters generally were more persuasive.
An April Nature editorial, however, predicted that computational argumentation will improve. "One day," the journal suggested, such systems will be able to "create persuasive language with stronger oratorical ability and recourse to emotive appeals—both of which are known to be more effective than facts and logic in gaining attention and winning converts, especially for false claims."
University of California, Berkeley A.I. expert Stuart Russell rightly tells Nature that people have the right to know whether they are interacting with a machine, especially when it is trying to influence them. Creators of persuasion machines who conceal that fact should be held liable for any harm they cause.