The Student News Site of Colorado State University

The Rocky Mountain Collegian


Internet censorship could help to filter terrorist threats: is it worth it?

In late 2015, Google Chairman Eric Schmidt posted an article on The New York Times’ website calling for an internet “spell-checker” of sorts, aimed at weeding out violent and malicious content online.

This was in response to the growing use of the internet as a means to indoctrinate and spread fear among large groups of people, most recently by ISIS. Because Schmidt has been at the forefront of one of the most popular high-tech companies in the world, it’s easy to read his argument with assurance that what he is saying is valid. Schmidt is also famous for the quote, “The Internet is the first thing that humanity has built that humanity doesn’t understand, the largest experiment in anarchy that we have ever had.” Clearly, the man has some seniority and foresight on topics relating to the internet.


ISIS isn’t the only current offender on the internet in terms of spreading false information and terrorist ideals. Russia is currently encountering similar issues with “trolls” that are inherently anti-Western and anti-democratic. However, in terms of the scale and effectiveness of online terrorism, ISIS is one of the top offenders. Using what a Popular Science article outlines as intimidation, coordination, reassurance and recruitment, ISIS has managed to create a sort of digital caliphate that keeps growing and adapting through marketing, propaganda and appeals to nationalism. With that in mind, the internet “spell-checker,” essentially a hate filter, that Schmidt is proposing isn’t a bad idea at all. If we can stop a popular maneuver ISIS uses, hijacking hashtags on Twitter to spread images of decapitated prisoners, then why not do so?

Unfortunately, the solution isn’t just a matter of capability but also of the woes of incorrect censorship. Schmidt uses an example in the article of a young girl surfing the internet on a tablet somewhere in rural Indonesia to argue that it is our duty to make the internet a safe place for her, and thus for all children. However, the solution Schmidt presents would, in essence, censor the internet for the betterment of our children and the human race as a whole. It is a noble idea in theory, but it presents many constraints for the users of the internet as a whole, who are by nature used to open communication.

Currently, the active filtration of offensive and terrorist-related material has been done by volunteer “trusted flaggers” on websites such as YouTube. Other attempts have come from private hacking groups or from algorithms based on text or image searches. So this type of censorship is already in motion, though it is not entirely automated like the “spell-checker” for hate and malice that Schmidt envisions.

Facebook has come the closest to implementing a sort of “spell-checker” for hateful and terrorist-related topics on its social media website. Major providers such as Twitter and Facebook already remove content for terms-of-service violations, but Facebook has also adopted a more automated approach, using it primarily to remove posts related to terrorism and hate speech.

However, Facebook’s own Mark Zuckerberg has deemed the algorithms not smart enough to stamp out hateful or terror-based content. An article published on Gigaom discusses a similar issue, arguing that history and vital information about the war in Syria are being lost. Because of automated censorship, mounds of important information and perspectives are disappearing that could help credible news sources report better on world events such as those taking place in Syria. That is why an internet “spell-checker” that could supposedly eliminate all of the nasty things on the internet is not currently feasible. But the idea could be reevaluated to combine the algorithms employed by firms such as Facebook with cybersecurity groups and ordinary users sifting through the material. Such an effort could make the sifting process a little less impossible, given the enormous volume of data users create daily.

Instead of deleting material immediately, a uniform set of online infrastructure could be put in place to flag webpages and other content that appears hateful or terrorist-related in nature. Similar provisions already exist for malicious webpages in general, so why couldn’t such infrastructure be built into a web browser such as Google Chrome by default? Then the criteria I mentioned above could easily be used to fight a threat such as ISIS.

Instead of flat-out censorship, let’s try to create a system that flags webpages and content with different colors of severity based on how threatening or hateful the material is. For any given post on Facebook or Twitter, a dialog box would pop up informing the user of the content’s nature, and a color-coding system of green, orange and red would gauge how offensive or hateful the content is. In essence, the system would allow constant evaluation of content by algorithms, cybersecurity groups and users to determine whether it poses a threat. This process wouldn’t eliminate content immediately but would instead warn users about the credibility and overall malicious nature of whatever is being displayed.
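For readers curious how such a color-coded warning system might work under the hood, here is a minimal sketch in Python. Every name and threshold below is purely illustrative, an assumption for the sake of example, not any platform’s actual implementation; in practice the threat score would come from the kind of classification algorithms and human review described above.

```python
# Hypothetical sketch of the proposed green/orange/red flagging system.
# A post receives a threat score between 0.0 and 1.0 (here assumed to come
# from some upstream classifier or human reviewer), and the score maps to
# a warning color shown to the user instead of the content being deleted.

def severity_label(threat_score: float) -> str:
    """Map a 0.0-1.0 threat score to a warning color (thresholds illustrative)."""
    if threat_score < 0.3:
        return "green"   # benign: display normally, no dialog
    if threat_score < 0.7:
        return "orange"  # questionable: show a caution dialog first
    return "red"         # likely hateful/terror-related: strong warning


def build_warning(post_text: str, threat_score: float) -> dict:
    """Assemble the dialog information shown alongside a flagged post."""
    label = severity_label(threat_score)
    return {
        "label": label,
        "show_dialog": label != "green",   # only warn for orange/red
        "needs_review": label == "red",    # route to human evaluators
        "post": post_text,
    }
```

A post scoring 0.85 would be labeled red and routed to human review, while a post scoring 0.1 would display normally, which matches the column’s point that the goal is informed warning, not immediate deletion.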

A proper internet hate filter, therefore, would not incite more censorship but would offer a better way to evaluate content and inform users, such as young children, of exactly what they are viewing. This could effectively blunt the effectiveness of a group such as ISIS without unnecessary mass censorship.

Collegian Columnist Chad Earnest can be reached at letters@collegian.com, or on Twitter @churnest.

