Singularities Surround Us

Robotic domination in I, Robot

Ray Kurzweil's book The Singularity is Near is becoming something of a cult sensation. The 672-page paperback version of the book is ranked 1,494th on Amazon (on par with The Great Gatsby). Recently, Kurzweil announced a Google-backed Singularity University ($25,000 for a 9-week summer program; $12,000 for a 3-day "Executive Program"), lending a touch of academic rigor to an idea that has lived mostly in science fiction. For the time- and budget-conscious, a rash of Singularity-themed documentaries is now on the horizon.

The Singularity, as I understand it, is the point in time when computers will be smart enough to build even smarter computers, effectively removing humans from the design-build loop of Artificial Intelligence (AI). Kurzweil predicts 2050. That means I'll be 68 when the robots take over!

Predicting the future is no walk in the park, but when it comes to Artificial Intelligence, everyone's packing a lunch. So while I won't try to argue that Kurzweil is wrong (I think he is), it's good to place his predictions in the cultural history of wildly inaccurate AI speculation.

Consider these predictions, both made by outstanding computer scientists actively involved in AI research:

  • 1965, Herbert Simon: "machines will be capable, within twenty years, of doing any work a man can do."
  • 1970, Marvin Minsky: "In from three to eight years we will have a machine with the general intelligence of an average human being."

As it turned out, these claims were not even remotely true. In fact, the whole history of AI has been one of boom-and-bust cycles, driven by exuberant but misplaced optimism.


Take, for example, the case of machine translation. During the Cold War, the problem of automatically translating intercepted Russian messages received considerable military funding. A 1954 Georgetown-IBM demonstration (translations of 49 chemistry-themed sentences with a 250-word vocabulary) captured public interest and spawned considerable investment, especially as the researchers claimed that the general translation problem would be solved in 3-5 years. When progress turned out to be much slower, funding was cut, and research all but stopped between 1965 and 1993.

Translation research has seen a significant resurgence, especially since I've been in graduate school (for computer science), mostly due to statistical methods. Rather than framing the translation of Russian into English as a series of rules written by expert bilingual humans (translate word R3 into word E3; switch the order of words E2 and E4; etc.), researchers now build models trained on many examples of translated sentences (word R3 translates to word E3 with probability 0.6; word E3 appears after E2 with probability 0.2; etc.). The translation of a Russian sentence is then the sequence of English words with the largest total probability according to the model. The statistical approach is less ambitious (today's models are too simple to capture all of language's nuances) but far more successful.
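The scoring idea above can be sketched as a toy program. Everything here is illustrative: the "Russian" words R1-R3, their candidate translations, and all probabilities are made up, and real systems use far larger models and dynamic programming rather than brute-force enumeration.

```python
import itertools

# Toy translation table: p(english word | russian word). Numbers are invented.
translation = {
    "R1": {"the": 0.7, "a": 0.3},
    "R2": {"cat": 0.6, "dog": 0.4},
    "R3": {"sleeps": 0.6, "sits": 0.4},
}

# Toy bigram language model: p(word | previous word). "<s>" marks sentence start.
bigram = {
    ("<s>", "the"): 0.5, ("<s>", "a"): 0.3,
    ("the", "cat"): 0.4, ("the", "dog"): 0.2,
    ("a", "cat"): 0.2, ("a", "dog"): 0.3,
    ("cat", "sleeps"): 0.5, ("cat", "sits"): 0.3,
    ("dog", "sleeps"): 0.3, ("dog", "sits"): 0.4,
}

def best_translation(russian_words):
    """Score every candidate English sequence by the product of its
    word-translation and bigram probabilities; return the highest-scoring one."""
    candidates = [list(translation[r].keys()) for r in russian_words]
    best, best_score = None, 0.0
    for english in itertools.product(*candidates):
        score, prev = 1.0, "<s>"
        for r, e in zip(russian_words, english):
            # Unseen bigrams get a small floor probability instead of zero.
            score *= translation[r][e] * bigram.get((prev, e), 0.01)
            prev = e
        if score > best_score:
            best, best_score = list(english), score
    return best

print(best_translation(["R1", "R2", "R3"]))  # ['the', 'cat', 'sleeps']
```

The model never encodes a rule like "the cat sleeps is grammatical"; that sentence simply wins because its translation and word-order probabilities multiply to the largest total, which is the core of the statistical approach.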

Kurzweil's Singularity prediction is based on exponential growth. The idea is that because computers have been doubling in speed every two years or so (a factor of roughly 1,000 in just 20 years; 1,000,000 in 40 years), huge paradigm shifts are actually quite close. But set aside the issue that single-chip speeds have plateaued, up against limits imposed by silicon's insulating ability and the speed of light (which is why new computers have multiple CPUs rather than faster ones). Progress in automatic translation simply has not been exponential. Rather, there have been a few periods of dramatic improvement, followed by long periods of very gradual development. This is the trend for the majority of important AI problems.
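The doubling arithmetic behind those factors of 1,000 and 1,000,000 is easy to check: ten doublings in 20 years gives 2^10 = 1,024, and twenty doublings in 40 years gives 2^20 = 1,048,576.

```python
# If speed doubles every 2 years, n years of progress multiplies it by 2 ** (n / 2).
def speedup(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(speedup(20))  # 1024.0 -- roughly a factor of 1,000
print(speedup(40))  # 1048576.0 -- roughly a factor of 1,000,000
```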

So, while speculating about the future is both interesting and important, I'd be wary of anyone trying to sell you $12,000 of it.
