Ethics and the Robot Revolution

Haleigh McGill


With robots edging closer to legitimately passing the Turing Test, automated phone services that now sound eerily human, and advances in robotics technology that may allow a robot to engage in logical and emotional reasoning, it is evident that we are in the midst of a technological revolution: the Rise of the Robot.

Many considerations arise when thinking about the integration of robots into everyday society and their potential ability to make sense of emotions and circumstantial evidence. This hypothetical scenario raises the question of the ethical treatment of robots, perhaps one of the more intriguing deliberations surrounding modern robotics. Would robots be protected under the Constitution, or granted the right to vote? Would unplugging a robot be considered murder? Could a robot make the decision to take a person off of life support? Do they deserve to be educated beyond what they are programmed to know, and if so, would the choice not to educate them be considered abuse or neglect? Could I take a robot to court? Could a robot take me to court?

According to a recent article from the New York Times, “A handful of experts in the emerging field of robot morality are trying to change [a robot’s inability to make moral decisions]. Computer scientists are teaming up with philosophers, psychologists, linguists, lawyers, theologians and human rights experts to identify the set of decision points that robots would need to work through in order to emulate our own thinking about right and wrong.” Should these experts find success, robots could potentially have an intellect very similar to that of a human. It isn’t impossible to design a robot to physically resemble a human, and if that resemblance were combined with artificial intellect, would we feel compelled to treat robots as humans? An article from Scientific American suggests that robots that closely imitate and look like humans may make people feel uncomfortable, depending on the job the robot is intended to do. This phenomenon is known as the “uncanny valley.”

However, let’s assume that robots will look absolutely identical to humans and that they will be fully capable of moral reasoning and emotional intelligence. The ability to recognize and respond to emotions doesn’t necessarily mean that robots can actually feel. Just because they can make decisions based on right and wrong doesn’t mean they can feel the gravity of the consequences, even when the right decision is made. Every choice has a consequence, and one possible consequence of choosing to develop and use this kind of technology is some level of chaos. The lines between human and machine would be irrevocably blurred, and with that transition comes a whole new set of rules.

Despite what could go wrong, and the limitations that exist even in the most advanced robotics technology, I believe we would have a responsibility to treat humanoids in an ethical manner. If we create robots to act and look like humans, we should be prepared to treat them like humans. Though there is much to be cautious of, I think it could be possible to live cohesively with human-like robots as long as expectations of behavior and boundaries are clear. However, whether the general population is ready to even imagine adjusting to that sort of everyday life is a whole other question when it comes to the Rise of the Robots.

Collegian Columnist Haleigh McGill can be reached at letters@collegian.com or on Twitter @HaleighMcGill.