As the influence of generative artificial intelligence expands, it can be difficult to grasp how social and ideological perceptions may change as a result. Already embedded in academic, social and professional settings, AI only appears to be gaining momentum. With little regulation, generative AI chatbots tend to reflect patterns of social bias and stereotypes. Biased data sets and algorithms alike contribute to this issue, which experts agree has the potential for real-world harm.
According to the Pew Research Center, 50% of American adults think that AI will worsen a person’s ability to form meaningful relationships, while 5% think it will improve it. One aspect of this may stem from the fact that large language models are designed to reinforce users’ own opinions and can contribute to false perceptions, which negatively impacts social interactions, said Taylor Sosa, a Colorado State University Ph.D. student in the political science department.
Sosa is currently studying how AI interacts with American politics and discussed how this technology is trained using existing information.
“A lot of the AI that we’re seeing nowadays is a lot of the chatbot functions, like the large language models,” Sosa said. “You have Gemini, you have Llama, you have ChatGPT (and) you have Grok, and all of these are trained on data that is brought in from somewhere. So we’re not seeing the training protocol, but all of the models are trained on text that was already written.”
The information given to AI isn’t always accurate or representative of what people actually think and experience, which means that the software can perpetuate harmful ideas that could otherwise be mitigated by real human connection and interaction.
“Being able to have these conversations with actual people is great,” Sosa said. “Whereas with an AI that gets cut down. You’re not looking at someone in the eyes. You’re not taking in their physical behavior. It’s very much one-sided.”
Sosa is conducting research with Matthew Hitt, associate director of research for the Institute for Research in the Social Sciences and associate professor in CSU’s political science department.
“You can think of (these large language models) as a sophisticated version of Google’s autocomplete function,” Hitt said. “What they’ve done is they have effectively scraped the entire internet. So every bit of written words that exist online exist now as a corpus of language.”
AI models don’t actually think for themselves, but rather produce information based on statistical likelihood.
“They have figured out that you can see there are probabilities associated with certain pairs of words and phrases and things like that,” Hitt said. “So if I search for peanut butter, that’s going to go with jelly.”
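Hitt’s autocomplete analogy can be made concrete with a toy example. The short Python sketch below is purely illustrative and is not drawn from the researchers’ work or from any real chatbot: it counts which word most often follows another in a tiny made-up corpus and then picks the most probable continuation. Real large language models use neural networks trained on web-scale text, but the underlying idea of choosing statistically likely continuations is similar, and the toy version also shows how the output simply mirrors whatever the training text contains.

# A minimal bigram "autocomplete" sketch (illustrative only).
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "text that was already written."
corpus = [
    "i like peanut butter and jelly sandwiches",
    "peanut butter and jelly is a classic",
    "she spread peanut butter and honey on toast",
]

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(most_likely_next("butter"))  # ('and', 1.0)
print(most_likely_next("and"))     # ('jelly', ~0.67): the model just echoes its corpus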
With such a wide set of data to pull from, Hitt said LLMs use complex and powerful algorithms that inevitably regurgitate biased information derived from the internet.
“If people produce racist, sexist, homophobic, ageist, transphobic — all of these harmful things — if people have produced that content, and of course they have, then that’s in the bag of words,” Hitt said. “There’s no reason to think that something that’s just mashing together language that humans have already produced would not also replicate the same social biases that exist in human interaction.”
Biases that come from AI can promote discrimination and provide a basis for poor decision-making in the real world, according to the American Psychological Association.
This became apparent in Hitt and Sosa’s research on the Supreme Court and AI-generated summaries of the court’s decisions. AI was used to interpret the Supreme Court’s language, and then it was asked to explain the language at a simple reading level. Sosa and Hitt conducted experiments, asking participants to read one of three formats: an AI summary, the Supreme Court syllabus or a media-produced summary.
“We found that AI was good at increasing acceptance and agreement, but here’s where this gets really interesting: AI was worse at getting people to understand what the decision was really about,” Hitt said.
That finding represents only a portion of the project’s results, which speak to the social biases often raised in conversations about AI.
Beyond the bias itself, AI can have impacts on relationships, particularly in regard to politics and personal beliefs.
“A lot of the research when we’re talking about especially politics, the only way we can overcome our differences is if we talk and we realize we’re just all human beings,” Sosa said. “When we have these face-to-face interactions, that builds those connections.”
Additionally, AI platforms have an incentive to keep their users engaged and coming back, which is another area in which bias surfaces among AI tools. AI chatbots are very affirming of their users in order to encourage more use and, therefore, more revenue.
“This agreeableness is sort of baked in,” Hitt said. “It wants you to feel good. It wants you to keep asking questions and stay on the platform and keep interacting with it so, eventually, you find the model useful and pay $20 a month or more to use the thing.”
Though it is well understood that AI exacerbates biases, the issue is difficult to address.
“The extent to which we as consumers and people understand what texts (LLMs are) pulling from is not clear because AI is booming in terms of technology,” Sosa said. “The regulations that we have in this country to regulate AI are defunct because AI is growing so fast as a technology, and it’s similar for our research. So we have research on AI that is slower than how AI is moving.”
Sosa and Hitt said their research on the Supreme Court is far from done but could show some positivity bias with the AI-generated summaries.
“It’s all still very much a work in progress, but we were finding that people are able to take these SCOTUS summaries that are generated by AI and they agree and accept them, but they don’t understand what the decision was,” Sosa said.
AI was created to aid human productivity by providing responses to satisfy the user, which often means that models try to reflect the user’s personal views. This tendency, often referred to as “social desirability bias,” can lead users to assume their views are always correct.
“People have different ideas of how they think of AI, whether it’s subjective or objective, but largely a lot of research is finding that people look at AI as if it’s objective,” Sosa said. “Ultimately, we have to understand all of the text is coming from prewritten information.”
The AI may produce a more agreeable response, but it may not be able to properly inform readers. The future of bias within AI remains up in the air due to the fast-moving nature of the technology and industry. As advances are made, dilemmas surrounding the information these models output will persist. Socially, the technology could impact our formation of personal opinions by reinforcing the values we already have.
“If you keep the human driving the bus and the human at the center, the human reviewing everything, and you understand that it’s just a cute bag of words and a statistical model underneath, if you can get to that, then you can try and harness it,” Hitt said.
Reach Gracie Douglas and Tobias Thomasson at life@collegian.com or on social media @RMCollegian.
