
Producer's Notes for Artificial Intelligence: Thinking Big


The term "artificial intelligence", was coined in the summer of 1956, on the bucolic grounds of Dartmouth College in Hanover, New Hampshire. There, John McCarthy (who would later go on to teach at Stanford), Marvin Minsky, Claude Shannon, Nathan Rochester and six other conference participants came together to lay out the framework for this exciting new field which would "...find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." (McCarthy et al., 1955)

Though it was McCarthy who persuaded his nine colleagues at the conference to adopt the term "artificial intelligence" to describe the nascent field, the seeds of artificial intelligence were planted earlier. Alan Turing, who was instrumental in breaking the German Enigma code during WWII, published a paper in 1950 that laid out what came to be known as the "Turing Test": if a machine could carry on a conversation with a human in so sophisticated a manner as to trick the human into thinking he or she was conversing with another human, then the machine would have displayed true "intelligence."

But nearly 60 years later, the world still awaits a machine capable of exhibiting "general A.I.," as opposed to the "narrow A.I." demonstrated by IBM's chess-playing Deep Blue, Stanford University's Stanley, an autonomous robotic vehicle, and other impressive albeit limited applications of A.I. Deep Blue, for example, may be able to beat Garry Kasparov at chess, but can it beat a 10-year-old at a game of checkers? The lack of a general A.I. is made even more stark when juxtaposed with Moore's Law, a maxim that goes back to 1965, when Intel co-founder Gordon Moore observed that the number of transistors on a computer chip doubles roughly every two years.

There's even a term, the "Singularity," for the moment when technological progress will leapfrog itself and herald the creation of computers that not only achieve human-like intelligence but also give rise to a progeny of computers smarter than their digital forebears. Though he didn't coin the term (sci-fi writer Vernor Vinge did), its most famous exponent is inventor Ray Kurzweil. He places the Singularity sometime before 2050 and believes that with this unprecedented technological progress, mankind may solve some of society's most pressing ills, such as global warming, and even conquer death by uploading one's consciousness into a virtual medium.

Though this seems a far stretch from engineering a domestic robot like Stanford's Artificial Intelligence Robot, top A.I. researchers like Stanford's Andrew Ng and Daphne Koller do believe that computing systems will someday be as smart as or smarter than humans. When I spoke with Dharmendra Modha about his work on cognitive computing at IBM, he talked effusively about creating an "i-Brain," a digital accessory that people could carry around, making decisions and processing information like its human cousin. If you're like me, and lament those moments when you've misplaced your keys or suffered other lapses of neural performance, you can't help but think that such a device can't arrive soon enough. On second thought, I'll wait until v2.0 hits the shelves.


And don't miss our Web Extra: A Dose of A.I. In this QUEST web exclusive, Stanford University computer science professor and A.I. researcher Daphne Koller provides an elegant explanation of how A.I. can be employed in the examining room to diagnose a patient's illness more accurately than a human clinician. Find out more and learn how medical diagnosis is just the tip of the iceberg when it comes to tasks that rely on making sense of a sea of data to arrive at an informed conclusion.

