After closing arguments, the event moved on to a second debate, this one about telemedicine.
An IBM research team based in Israel began working on the project not long after IBM’s Watson computer beat two human champions in a “Jeopardy!” challenge in 2011.
But rather than just scanning a giant trove of data in search of factoids, IBM’s latest project taps into several more complex branches of AI. Search engine algorithms used by Google and Microsoft’s Bing use similar technology to digest and summarize written content and compose new paragraphs. Voice assistants such as Amazon’s Alexa rely on listening comprehension to answer questions posed by people. Google recently demonstrated an eerily human-like voice assistant that can call hair salons or restaurants to make appointments.
But IBM says it’s breaking new ground by creating a system that tackles deeper human practices of rhetoric and analysis, and how they’re used to discuss big questions whose answers aren’t always clear.
“If you think of the rules of debate, they’re far more open-ended than the rules of a board game,” said Ranit Aharonov, who manages the debater project.
IBM doesn’t try to declare a winner of the debates, but Noa Ovadia, one of the human debaters, said the computer was a formidable opponent even if it made a few too many blanket statements about space exploration being the pinnacle of human achievement.
Ovadia, a national debate champion in Israel, said she was impressed by its fluency in language and ability to construct sentences. She said the computer was able to “get to the bottom line of my arguments” and respond to them.
Among several outside experts IBM invited to attend Project Debater’s debut was Chris Reed, who directs the Centre for Argument Technology at the University of Dundee in Scotland. Reed said he was impressed by its grasp of “procatalepsis” — a rhetorical technique that involves anticipating an opponent’s argument and preemptively rebutting it.
As expected, the machine tends to be better than humans at bringing in numbers and other detailed supporting evidence. It’s also able to latch onto the most salient and attention-getting elements of an argument, and can even deliver some self-referential jokes about being a computer.
But it lacks tact, researchers said. Sometimes the jokes don’t come out right. And on Monday, some of the sources it cited — such as a German official and an Arab sheikh — didn’t seem particularly germane.
“Humans tend to be better at using more expressive language, more original language,” said Dario Gil, IBM’s vice president of AI research. “They bring in their own personal experience as a way to illustrate the point. The machine doesn’t live in the real world or have a life that it’s able to tap into.”