McGill: Humanistic robots — how close is too close?

Haleigh McGill

The Turing Test’s purpose has evolved somewhat since its original conception as the Imitation Game in 1950, in which a human tester, communicating from a location remote from the machine, would ask a series of questions and, based on the answers, try to determine within a set time frame whether the respondent was human or a computer.

In more recent years, the test seems to have become less about convincing someone over what is essentially a phone call that they are speaking with another human, and more about determining whether an entity with artificial intelligence (AI) can thoroughly imitate a human.


In the 2015 award-winning movie Ex Machina, an intricately impressive and eerily realistic AI — eerie in a way entirely opposite to the Uncanny Valley — named “Ava” is subjected to her creator’s idea of what the Turing Test should be: a series of face-to-face interactions between an AI and a human. Caleb Smith (Domhnall Gleeson), the human in this case, must determine whether Ava has artificial intelligence based not only on her responses to questions, but also on her mannerisms, body language and demeanor.

Ava’s creator, Nathan Bateman (Oscar Isaac), explains that Ava’s “brain” is programmed with vast amounts of smartphone and internet data collected via Blue Book, the world’s most popular search engine in the story, which happens to be run by Bateman. Unbeknownst to cell phone service providers, he redirects their customers’ photos, searches and conversations — any concrete traces of human interaction — to his own database in order to program a human-like, abstract consciousness. This allows Ava to recognize and understand human behaviors like facial expressions and micro-mannerisms, as well as more complicated territory such as how humans think, as opposed to simply what they think.

So, if we approach artificial intelligence the way Ex Machina does, and the way many contemporary AI engineers and researchers are approaching it today, the focus of AI development shifts from determining how closely a robot such as Ava should resemble a human in order to ease the weirdness and discomfort of interaction, to figuring out how to build an AI that comes as close as possible to being human. Though a thrilling idea to entertain, research or explore on the big screen, it’s a dangerous direction.

There are four mental abilities that differentiate humans from animals, according to Marc Hauser, director of Harvard’s cognitive evolution lab: generative computation, promiscuous combination of ideas, mental symbols and abstract thought. In AI’s humbler beginnings, the goal of creating a successful entity was essentially to bring each of the four abilities into working order separately, each contributing to the function of the whole — the “whole” being the robot’s capacity to resemble a human. But now, newer questions arise. Does each ability have to operate separately from the others? Could a robot actually learn to intermix them?

In other words, humans don’t typically use those four abilities separately; they tend to influence one another. We don’t think in straight, clean-cut, robotic lines. Providing an AI with the opportunity to deviate from a more computerized, soulless pattern of processing information could quite possibly be the subtle beginning of our demise. Robots would then, in theory, be able to search for and comprehend the why behind the why. They would not only know the surface reasons why certain facial expressions, mannerisms or speech come from a human, they’d also be able to learn the more complex reasons why those three things occur the way they do in various situations.

AIs can learn and memorize that information much faster than humans can, mostly because they will always get it on the first try. So what if, upon the rise of the robots, they find they are dissatisfied with how we do things? What if they do not agree with how the world works, while having the ability to take matters into their own hands?

Or, a better question: what if we completely rule out building robots that look and behave like humans, and stick to constructing them for efficient, task-oriented performance? We should not build robots with the goal of humanizing them in any way. This would help eliminate virtually all doomsday possibilities and ethical concerns about how we’d have to treat these robots, as well as reduce the risks and consequences of a highly capable AI engineer on a power trip. Granted, that’s not the most exciting option, but compared to the worst-case scenarios of AI, it’s rather appealing.

The human brain is the most complicated piece of technology out there, and I think we still have a long while before that fact could be dethroned. I know that the vision of humanistic AIs living among us is a heavily ingrained, science-fiction-made-real scenario for our future, but it doesn’t have to come to fruition. Can you imagine, though? If we allow that day to come — a temporary period of peaceful coexistence that dissolves into a new kind of war — we will be left with a currently unpredictable, likely grim aftermath. We could become obsolete.

In the dark, chillingly possible words of Nathan Bateman regarding AIs of Ava’s sophistication, “One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa: an upright ape living in dust with crude language and tools, all set for extinction.”

Collegian Opinion Editor Haleigh McGill can be reached at letters@collegian.com, or on Twitter @HaleighMcGill.
