In May 2014, Cambridge University physicist Stephen Hawking penned an article that set out to sound the alarm about the dangers of rapidly advancing artificial intelligence.
Hawking, writing in the UK’s The Independent along with co-authors who included Max Tegmark and Nobel laureate Frank Wilczek, both physicists at MIT, as well as computer scientist Stuart Russell of the University of California, Berkeley, warned that the creation of a true thinking machine “would be the biggest event in human history.”
A computer that exceeded human-level intelligence might be capable of “outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.” Dismissing all this as science fiction might well turn out to be “potentially our worst mistake in history.”
All the technologies we’ve seen so far employ what is categorized as specialized, or “narrow,” artificial intelligence.
Even IBM’s Watson, perhaps the most impressive demonstration of machine intelligence to date, doesn’t come close to anything that might reasonably be compared to general, human-like intelligence. Indeed, outside the realm of science fiction, all functional artificial intelligence technology is narrow AI.
The quest to build a genuinely intelligent system—a machine that can conceive new ideas, demonstrate an awareness of its own existence, and carry on coherent conversations—remains the Holy Grail of artificial intelligence.
It seems clear that the field has now acquired enormous momentum. In particular, the rise of companies like Google, Facebook, and Amazon has propelled a great deal of progress. Never before have such deep-pocketed corporations viewed artificial intelligence as absolutely central to their business models—and never before has AI research been positioned so close to the nexus of competition between such powerful entities.
A similar competitive dynamic is unfolding among nations. AI is becoming indispensable to militaries, intelligence agencies, and the surveillance apparatus of authoritarian states. Indeed, an all-out AI arms race might well be looming in the near future. The question, I think, is not whether the field as a whole is in any danger of another AI winter but, rather, whether progress remains limited to narrow AI or ultimately expands to Artificial General Intelligence as well.
If AI researchers do eventually manage to make the leap to AGI, there is little reason to believe that the result will be a machine that simply matches human-level intelligence. Once AGI is achieved, Moore’s Law alone would likely soon produce a computer that exceeded human intellectual capability.
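To make the arithmetic behind that claim concrete, here is a minimal sketch. It assumes a commonly cited Moore’s Law cadence of compute doubling roughly every two years, and that an AGI at exact human parity would scale directly with that compute; both the doubling period and the human-parity baseline are illustrative assumptions, not claims from the article.

```python
# Illustrative only: the doubling arithmetic behind the Moore's Law point above.
# Assumptions (not from the article): usable compute doubles every ~2 years,
# and a human-parity AGI's capability scales in direct proportion to compute.

DOUBLING_PERIOD_YEARS = 2.0  # commonly cited Moore's Law cadence; an assumption


def capability_multiple(years: float) -> float:
    """Multiples of the human-parity baseline after `years` of doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)


for years in (2, 4, 10, 20):
    print(f"After {years:2d} years: {capability_multiple(years):,.0f}x human baseline")
```

Under those assumptions, a machine that merely matched us at the moment of its creation would stand roughly a thousand times beyond the human baseline within two decades of routine hardware progress.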
A thinking machine would, of course, continue to enjoy all the advantages that computers currently have, including the ability to calculate and access information at speeds that would be incomprehensible to us. Inevitably, we would soon share the planet with something entirely unprecedented: a genuinely alien—and superior—intellect.
If such an intelligence explosion were to occur, it would certainly have dramatic implications for humanity. Indeed, it might well spawn a wave of disruption that would ripple across our entire civilization, not just our economy. In the words of futurist and inventor Ray Kurzweil, it would “rupture the fabric of history” and usher in an event—or perhaps an era—that has come to be called “the Singularity.”
Thank you for reading!
The next part will cover the Singularity.