The Student News Site of Colorado State University

The Rocky Mountain Collegian


The progression of artificial intelligence: To be feared or celebrated?

Haleigh McGill

As computers grow ever closer to convincingly passing the Turing test and the limits of artificial intelligence continue to be pushed, many important questions about its capabilities arise. One of the most pressing and obvious is: What happens when artificial intelligence can consistently be mistaken for organic intelligence, that is, for a human subject, thereby passing the Turing test?

The concept behind the Turing test was developed in 1950 by computer scientist and mathematician Alan M. Turing. He believed that by the year 2000, “(computers) would be able to play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification (machine or human) after five minutes of questioning.”


Many will argue that this test has already been passed: earlier this year, a program simulating a 13-year-old Ukrainian boy named “Eugene Goostman” fooled 33 percent of test judges. But according to an article from the Los Angeles Times, this alleged victory “was accepted at face value by tech journals and newspapers around the world, including The Times. But the truth is rather more modest … Turing did believe that a machine that could meet his specifications would do so with the breadth of language and ideas of a generally educated person. In that light, deliberately lowering the questioners’ expectations of the program, as Eugene’s creators did by posing it as a 13-year-old Odessa native, defeats the very purpose of the test. True artificial intelligence, which involves a process that Turing considered “learning,” may not be unattainable, or even far off. But ‘Eugene Goostman’ didn’t display it.”

Generally speaking, the idea of a robot being able to successfully pass as a human is exciting in a futuristic sense, but also a little off-putting and scary. Consider the movie “I, Robot” and how the robots are initially viewed as helpful and practical innovations for everyday life. Everything is great until one of the robots violates the three laws of robotics, an occurrence that had been deemed impossible by their creator, and it is realized that these robots can be violent and nearly unstoppable. In line with this example, Elon Musk, CEO of Tesla and the founder of SpaceX, goes so far as to say, “with artificial intelligence we are summoning the demon … In all those stories where there’s the guy with the pentagram and the holy water, it’s like ‘yeah, he’s sure he can control the demon.’ Didn’t work out.”

Stephen Hawking, esteemed physicist and professor at the University of Cambridge, commented on the subject of artificial intelligence in an article he co-authored for the U.K. Independent earlier this year.

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks … One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Whether the rapid progression of artificial intelligence is an innovation to be celebrated, a threat to be feared or a little bit of both, its implications and consequences are certainly something to consider, as co-existing with robots may be closer than we think.

Collegian Columnist Haleigh McGill can be reached at letters@collegian.com, or on Twitter @HaleighMcGill.
