LTTE: It’s important to know of weaponized artificial intelligence

Guest Author

Editor’s Note: All opinion section content reflects the views of the individual author only and does not represent a stance taken by The Collegian or its editorial board. Letters to the Editor reflect the view of a member of the campus community and are submitted to the publication for approval.

To the Editor,


I am writing this essay to bring awareness and recognition to a fast-approaching topic in the field of military technology — weaponized artificial intelligence. 

Weaponized AI is any military technology run by a computer system that makes its own decisions. Simply put, anything that automatically decides a course of action against an enemy without human control would fall under this definition.

Weaponized AI is a perfect example of a sci-fi idea that has found its way into the real world before being completely understood. That said, weaponized AI places global security at risk and must be recognized by institutions like Colorado State University before it is widely deployed on the battlefield.

Nations are constantly racing to field the next best weapon as it is developed, and AI is no exception. Currently, AI is driving one of the largest technology competitions since that of nuclear weapons during the Cold War. At the top of this competition are China and the United States.

With little to no international restriction on the deployment of AI weaponry, a modern "arms race" will continue to develop, creating tension between world powers as each fears the other will reach the "perfect" AI weapon first.

The other inherent danger is the gap being created between advanced world powers and countries incapable of developing such technology. The likelihood of global conflict between these nations increases, as powers that wield weaponized AI hold a distinct edge over countries that do not. Given the lack of international regulations on this technology, that edge leaves room for abuse of power.

Going further, my studies have shown that this technology poses considerable risk to international human rights law. In its current state, weaponized AI is unreliable at doing what it is intended to do. As an example, Project Maven, an AI program currently used by the United States, only identifies military threats using complex algorithms.

While this seems harmless, the direction in which the world is taking this technology is not. What would happen if this technology's unreliability cost innocent lives through a targeting error of the kind AIs like Project Maven are prone to making? Likewise, who would take responsibility for the actions of a machine?

What we have is a blurring of moral boundaries as we come closer to allowing this technology to determine who is a true threat. These kinds of errors cannot be tolerated under the rules of modern warfare.

A final obstacle surrounding AI is the United Nations’ inability to come to a consensus on its use. Researcher Eugenio Garcia with the United Nations stated, “Advanced military powers remain circumspect (guarded) about introducing severe restrictions on the use of these technologies.”


Although people easily recognize the dangers AI poses to national security, countries are unwilling to restrict its development. Furthermore, with minimal current legislation addressing the technology's unreliability, weaponized AI will advance beyond what we can control.

While I make these claims, one must recognize that the technology does offer the benefit of removing soldiers from the battlefield. Even so, nations around the world are not monitoring this rising issue.

Colorado State University, as a Tier 1 research university invested in military technology, should be the institution that steps up to the plate and recognizes catastrophe before it happens. These threats to global security may not be present now, but if we do not advocate for international legislation, they will become reality.

Sincerely,

Thomas Marshall

Third-year mechanical engineering student at CSU

Working under Azer Yalin as an undergraduate research assistant exploring Air Force technology

The Collegian’s opinion desk can be reached at letters@collegian.com. To submit a letter to the editor, please follow the guidelines at collegian.com