Editor’s Note: All opinion section content reflects the views of the individual author only and does not represent a stance taken by The Collegian or its editorial board.
We are once again in an intense global race for automation and efficiency, with one culprit leading the charge: artificial intelligence.
American industries are all-in on AI. In 2024, the U.S. invested $109.1 billion in private AI, overshadowing China’s $9.3 billion and the United Kingdom’s $4.5 billion. With this level of investment, AI has trickled into almost every area of modern life; you can’t even search for something online without an unprompted AI overview popping up first.
With this, Colorado State University students need to monitor the use of AI in hiring over the coming years. According to the World Economic Forum, around “88% of companies already use some form of AI for initial candidate screening.” Many employers are replacing traditional assessment tools with AI-driven programs: AI-led interviews, textual analyses, coding evaluations and conversational chatbots — all of which speed up the hiring process, automate tasks and quantify your potential as an employee.
These AI bots are then often trained to identify talent signals or predictive markers of success, which serve as indicators of work-based potential to the employers. A study by SHRM Labs states that this move seeks to address “persistent talent shortage issues” and that “AI promises to be a game changer in providing data-based assistance for critical HR decisions.”
The use of AI in hiring may be efficient, as it cuts the time and effort it takes employers to sift through applications, schedule interviews and meet with prospective employees. However, there is one thing that AI cannot eliminate from the hiring process: human bias.
AI doesn’t create new data; it uses existing data already available on various online platforms, all of which has been created by humans. This existing data can be incomplete, poorly made or shaped by decades of exclusion and inequality.
AI does not eliminate human bias from the hiring process. Instead, it risks perpetuating the prejudice, careless mistakes and assumptions that have been shaped by generations of decision-makers.
An algorithm trained on resumes from past successful candidates, for example, could reproduce past discriminatory decisions that favored candidates of a predominant gender, age, national origin, race or other group. In other words, AI bots could unintentionally disadvantage minority groups simply because those candidates do not meet the “standards” the bots are trained to look for.
Discrimination through AI’s algorithmic decision-making is already a lived reality for many. In 2018, Amazon had to scrap its AI-driven recruitment tool after discovering it perpetuated gender biases. Its bot analyzed patterns in resumes submitted to the company over a 10-year period. Many of these came from men, reflecting the male-dominated nature of the tech industry. As a result, the bot learned that male candidates were preferable, and it even penalized resumes that included the word “women.”
HireVue, a video interview and assessment vendor, also came under scrutiny in 2020 amid concerns that its facial recognition and speech analysis software unfairly judged nonwhite and disabled candidates.
Most recently, a group of job applicants filed a lawsuit against Eightfold AI, a screening company whose AI-driven technology is used for hiring by many major companies, including PayPal and Microsoft. Notably, CSU announced last fall that it was partnering with Microsoft to integrate AI tools across CSU platforms.
There are many other cases like these in which AI has penalized candidates over factors like gaps in employment, divorce records, age or being a mother. Employers are bound by the standards set in Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act and the Americans with Disabilities Act, but unregulated AI-driven screening tools do not always comply with these mandates.
As a result, many states are trying to implement regulations to address AI’s proliferation in the workplace. Illinois enacted a series of bills placing substantive limits on the use of AI. New York City went further, passing Local Law 144 in 2021, which prohibits employers from using AI-driven hiring tools unless the tool has undergone a “bias audit” to identify and measure any unfair outcomes.
While it is encouraging that local, state and federal governments are taking legal action to regulate the use of AI in hiring, employers should be the first to exercise caution.
Employers need to acknowledge the potential discriminatory impact of their own tools and question whether they inadvertently prioritize efficiency over inclusion. This could include conducting bias audits, informing candidates of what AI will measure or allowing candidates to request accommodation or forgo the AI screening entirely.
The use of AI in hiring cannot go unregulated. If it does, AI threatens to undo the progress that’s been made toward inclusivity and equity in hiring.
Reach Claire VanDeventer at letters@collegian.com or on social media @RMCollegian.
