As computers edge ever closer to passing the Turing test and the limits of artificial intelligence continue to be pushed, many important questions about the capabilities of artificial intelligence arise. One of the most pressing, and most obvious, of these questions is: ‘What happens when artificial intelligence can consistently be misidentified as organic intelligence or a human subject (i.e., when it passes the Turing test)?’
The concept behind the Turing test was developed in 1950 by the computer scientist and mathematician Alan M. Turing. He predicted that by the year 2000, computers “would be able to play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification (machine or human) after five minutes of questioning.”
Many argue that this test has already been passed: earlier this year, a program simulating a 13-year-old Ukrainian boy named “Eugene Goostman” fooled 33 percent of test judges. Fooling a third of the judges narrowly clears the 30 percent misidentification rate implied by Turing’s prediction. But according to an article from the Los Angeles Times, this alleged victory “was accepted at face value by tech journals and newspapers around the world, including The Times. But the truth is rather more modest … Turing did believe that a machine that could meet his specifications would do so with the breadth of language and ideas of a generally educated person. In that light, deliberately lowering the questioners’ expectations of the program, as Eugene’s creators did by posing it as a 13-year-old Odessa native, defeats the very purpose of the test. True artificial intelligence, which involves a process that Turing considered ‘learning,’ may not be unattainable, or even far off. But ‘Eugene Goostman’ didn’t display it.”
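To make the arithmetic behind that claim concrete, here is a minimal sketch in Python (not from the original column; the function name and the judge counts are illustrative, since the article reports only percentages) of how Turing’s 70 percent prediction turns into the 30 percent bar that the Goostman result was measured against:

```python
# Turing's 1950 prediction: after five minutes of questioning, an average
# interrogator should have no more than a 70 percent chance of correctly
# identifying the machine. Equivalently, the machine should be mistaken
# for a human at least 30 percent of the time.
TURING_BAR = 0.30

def clears_turing_bar(judges_fooled: int, total_judges: int) -> bool:
    """Return True if the misidentification rate meets Turing's implied bar."""
    return judges_fooled / total_judges >= TURING_BAR

# "Eugene Goostman" reportedly fooled 33 percent of its judges. The article
# gives no judge counts, so 33 of 100 is used purely to reproduce that rate.
print(clears_turing_bar(33, 100))  # True: 0.33 >= 0.30
```

Clearing that numerical bar is easy to engineer with a constrained persona, which is exactly the Times’ objection: the number was met, but the test’s assumptions were not.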
Generally speaking, the idea of a robot successfully passing as a human is exciting in a futuristic sense, but also a little off-putting and scary. Consider the movie “I, Robot” and how the robots are initially viewed as helpful and practical innovations for everyday life. Everything is great until one of the robots violates the three laws of robotics, something its creator had deemed impossible, and people realize that these robots can be violent and nearly unstoppable. In line with this example, Elon Musk, CEO of Tesla and the founder of SpaceX, goes so far as to say, “with artificial intelligence we are summoning the demon … In all those stories where there’s the guy with the pentagram and the holy water, it’s like ‘yeah, he’s sure he can control the demon.’ Didn’t work out.”
Stephen Hawking, esteemed physicist and professor at the University of Cambridge, commented on the subject of artificial intelligence in an article he co-authored earlier this year for the U.K. newspaper The Independent.
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks … One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Whether the rapid progression of artificial intelligence is an innovation to be celebrated, a threat to be feared or a little bit of both, its implications and consequences are certainly something to consider, as co-existing with robots may be closer than we think.
Collegian Columnist Haleigh McGill can be reached at letters@collegian.com, or on Twitter @HaleighMcGill.