Watson, named after IBM’s iconic founder Thomas J. Watson, is a project seven years in the making. Its DeepQA software runs hundreds of simultaneous algorithmic calculations, which help the machine parse human speech patterns, check them against its vast database of knowledge, and provide a most likely answer along with a confidence level for that answer. To run all those algorithms, Watson is powered by 90 IBM Power 750 Express servers (each with 32 processor cores) and 16 terabytes of memory.
In 2011, as a test of its abilities, Watson competed on the quiz show Jeopardy!, in the show's only human-versus-machine match-up. In a two-game, combined-point match, broadcast in three Jeopardy! episodes February 14–16, Watson bested Brad Rutter, the biggest all-time money winner on Jeopardy!, and Ken Jennings, the record holder for the longest championship streak.
Jeopardy! is an American quiz show featuring trivia in topics such as history, literature, the arts, pop culture, science and sports. The show has a unique answer-and-question format in which contestants are presented with clues in the form of answers, and must phrase their responses in question form.
Dr Ferrucci and his team have been using search, semantics and natural-language processing technologies to improve the way computers handle questions and answers in plain English. That is easier said than done. In parsing a question, a computer has to decide what is the verb, the subject, the object, the preposition as well as the object of the preposition. It must disambiguate words with multiple meanings, by taking into account any context it can recognise. When people talk among themselves, they bring so much contextual awareness to the conversation that answers become obvious. “The computer struggles with that,” says Dr Ferrucci.
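The disambiguation step Dr Ferrucci describes can be illustrated with a toy sketch (this is an assumption for illustration, not Watson's actual code): pick the sense of an ambiguous word whose signature vocabulary overlaps most with the surrounding context, in the spirit of the simplified Lesk algorithm. The word senses and signature sets below are invented for the example.

```python
# Toy word-sense disambiguation: score each candidate sense of an
# ambiguous word by how many of its signature words appear in context.
# (Hypothetical senses/signatures; not Watson's real knowledge base.)

SENSES = {
    "bank": {
        "financial institution": {"money", "deposit", "loan", "account"},
        "river edge": {"river", "water", "shore", "fishing"},
    }
}

def disambiguate(word, context_words):
    """Pick the sense whose signature overlaps most with the context."""
    context = set(context_words)
    best_sense, best_overlap = None, -1
    for sense, signature in SENSES[word].items():
        overlap = len(signature & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", ["he", "sat", "on", "the", "river", "fishing"]))
# -> river edge
```

A human brings vastly more context than two signature sets, which is exactly why, as Dr Ferrucci notes, "the computer struggles with that."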
Watson consistently outperformed its human opponents on the game's signaling device, but had trouble responding to a few categories, notably those with short clues containing only a few words. For each clue, Watson's three most probable responses were displayed on the television screen. Watson had access to 200 million pages of structured and unstructured content consuming four terabytes of disk storage, including the full text of Wikipedia.
Watson was not connected to the Internet during the game.
When playing Jeopardy!, all players, including Watson, had to wait until the host spoke each clue in its entirety; then a light was lit as a "ready" signal, and the first to activate their buzzer button won the chance to respond. Watson received the clues as electronic text at the same moment they were made visible to the human players. It would then parse the clues into different keywords and sentence fragments in order to find statistically related phrases.
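That first parsing step can be sketched in a few lines (a simplification for illustration, not IBM's implementation): split the clue into words and drop common stop words, leaving the keywords to search on. The stop-word list here is an invented stand-in.

```python
# Minimal keyword extraction: tokenize a clue and filter stop words.
# (Illustrative stop-word list; Watson's real pipeline was far richer.)

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "this", "is", "was", "for"}

def extract_keywords(clue):
    words = [w.strip(".,!?\"'").lower() for w in clue.split()]
    return [w for w in words if w and w not in STOP_WORDS]

clue = "This 19th-century author of 'Moby-Dick' was born in New York."
print(extract_keywords(clue))
# -> ['19th-century', 'author', 'moby-dick', 'born', 'new', 'york']
```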
Watson's main innovation was not the creation of new algorithms for this operation but rather its ability to quickly execute thousands of proven language-analysis algorithms simultaneously to find the correct answer. The more algorithms that find the same answer independently, the more likely Watson is to be correct.
Once Watson has a small number of potential solutions, it checks them against its database to ascertain whether each one makes sense.
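The voting idea described above can be sketched as follows (a toy model of the principle, not Watson's actual scoring machinery): treat each algorithm's best guess as a vote, and derive a confidence from how strongly the independent guesses agree.

```python
# Toy aggregation: many independent analysis algorithms each propose an
# answer; agreement among them yields the final answer and a confidence.
from collections import Counter

def aggregate(candidate_answers):
    """Combine independent guesses into (answer, confidence)."""
    votes = Counter(candidate_answers)
    answer, count = votes.most_common(1)[0]
    confidence = count / len(candidate_answers)
    return answer, confidence

# Pretend five analysis algorithms each returned their best guess:
guesses = ["Toronto", "Chicago", "Chicago", "Chicago", "Chicago"]
answer, confidence = aggregate(guesses)
print(answer, confidence)  # -> Chicago 0.8
```

In the real system the confidence threshold also governed whether Watson attempted to buzz in at all.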
Because Watson's basic working principle is to parse keywords in a clue while searching for related terms as responses, the system has several strengths and weaknesses when compared with a human Jeopardy! player.
Watson has deficiencies in understanding the contexts of the clues. As a result, human players usually generate responses faster than Watson, especially to short clues. Unlike a human player, Watson's programming prevents it from using the popular tactic of buzzing before it is sure of its response.
Watson has consistently better reaction time on the buzzer once it has generated a response, and is immune to human players' psychological tactics. Also, Watson could avoid the time-penalty for accidentally signalling too early, because it was electronically notified when to buzz, whereas the human contestants had to anticipate the right moment.
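The buzzer dynamic described above can be modeled crudely (a hypothetical sketch, with invented timing numbers): Watson reacts a fixed few milliseconds after the electronic "ready" notification, while a human who anticipates the light risks a brief lock-out penalty for buzzing too early.

```python
# Toy model of the buzzer race: a negative human delay means the human
# buzzed before the ready signal and is locked out for penalty_ms.
# (All timings are invented for illustration.)

def buzzer_winner(watson_delay_ms, human_delay_ms, penalty_ms=250):
    """Return who buzzes in first after the ready signal."""
    if human_delay_ms < 0:
        # Early buzz: locked out, then must buzz again after the penalty.
        human_delay_ms = penalty_ms + abs(human_delay_ms)
    return "Watson" if watson_delay_ms < human_delay_ms else "human"

print(buzzer_winner(watson_delay_ms=10, human_delay_ms=40))   # -> Watson
print(buzzer_winner(watson_delay_ms=50, human_delay_ms=20))   # -> human
print(buzzer_winner(watson_delay_ms=10, human_delay_ms=-5))   # -> Watson
```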
Does this mean the end of the line for human dominance over machines? “Absolutely not,” says Oren Etzioni, director of the Turing Centre at the University of Washington in Seattle. But it does mean, he notes, that computers will be able to achieve vastly more than they can today. For a start, super-smart machines capable of answering questions in English (or any other natural language) will change search engines out of all recognition.
In the long term, Watson’s progeny could help people sift through the thousands of possibilities they confront in their public and private lives, and come up with handfuls of appropriate recommendations—whether in medical diagnoses and treatments, legal precedents or investment opportunities. But we will believe that when we see it!