The Student News Site of Colorado State University

The Rocky Mountain Collegian


November 8, 2023


LTTE: It’s important to know of weaponized artificial intelligence

Editor’s Note: All opinion section content reflects the views of the individual author only and does not represent a stance taken by The Collegian or its editorial board. Letters to the Editor reflect the view of a member of the campus community and are submitted to the publication for approval.

To the Editor,


I am writing to bring awareness and recognition to a fast-approaching topic in the field of military technology: weaponized artificial intelligence.

Weaponized AI is any military technology that operates off a computer system that makes its own decisions. Simply put, anything that automatically decides a course of action against an enemy without human control would fall under this definition.

Weaponized AI is a perfect example of a sci-fi idea that has found its way into the real world and is not yet completely understood. This said, weaponized AI places global security at risk and must be recognized by institutions like Colorado State University before it becomes widely deployed on the battlefield.

Nations are constantly racing to deploy the next best weapon as it is developed, and AI is no exception. Currently, AI is responsible for one of the largest technology competitions since that of nuclear weapons during the Cold War. At the top of this competition are China and the United States.

With little to no international restrictions on the deployment of AI weaponry, a modern "arms race" will continue to develop, creating tensions between world powers as fears arise that the opposing side will reach the "perfect" AI weapon first.

The other inherent danger is the gap being created between advanced world powers and countries that are incapable of developing such technology. The likelihood of global conflict between these nations increases, as powers that wield weaponized AI hold a distinct edge over countries that do not. With no international regulations on the use of this technology, that power is open to misuse.


Going further, my studies have shown that this technology poses considerable risk to international human rights law. In its current state, weaponized AI has been found unreliable at doing what it is intended to do. As an example, Project Maven, a current AI used by the United States, identifies military threats using only complex algorithms.

While this seems harmless, the direction in which the world is taking this technology is not. What would happen if this technology's unreliability cost innocent lives through a targeting error of the kind AIs like Project Maven are prone to making? Likewise, who would take responsibility for the actions of a machine?

What we have is a blurring of moral boundaries as we come closer to allowing this technology to determine who is a true threat. These kinds of errors cannot be tolerated by the rules of modern warfare.


A final obstacle surrounding AI is the United Nations’ inability to come to a consensus on its use. Researcher Eugenio Garcia with the United Nations stated, “Advanced military powers remain circumspect (guarded) about introducing severe restrictions on the use of these technologies.”

Although people readily recognize the dangers AI poses to national security, countries are unwilling to restrict its development. Furthermore, with minimal legislation addressing the technology's unreliability, weaponized AI will advance beyond what we can control.

While I make these claims, one must recognize that the technology does offer the benefit of removing soldiers from the battlefield. Even so, nations around the world are not monitoring this rising issue.

Colorado State University, as a Tier 1 research university with investments in military technology, should be the institution that steps up to the plate and recognizes catastrophe before it happens. These threats to global security may not be present now, but if we do not advocate for international legislation, these dangers will become reality.


Thomas Marshall

Third-year mechanical engineering student at CSU

Working under Azer Yalin as an undergraduate research assistant exploring Air Force technology

The Collegian’s opinion desk can be reached at To submit a letter to the editor, please follow the guidelines at
