As artificial intelligence outpaces regulation and research, an uncharted mix of technology and partisanship is beginning to reshape America’s political landscape, causing researchers to question how elections stand to be transformed.
President Donald Trump was the first candidate to leverage AI-generated communication in the 2024 presidential election. Recently, the National Democratic Training Committee launched a course on using AI in future progressive campaigns. In July, the White House unveiled a plan to “(accelerate) innovation” in the AI industry; in September, Democratic Gov. Gavin Newsom followed suit.
The increasingly prevalent and largely unstandardized use of AI — specifically large language models, or LLMs — has left researchers scrambling to understand even the most immediate political implications. Colorado State University political science professor Matthew Hitt said the technology’s rapid evolution has made it difficult to measure AI’s true influence on democracy.
“The AI tools that exist are rapidly increasing in sophistication as time goes by, and the election science of how AI interacts just can’t keep up,” Hitt said. “We’re really chasing a moving target, so a lot of work needs to be done. I think it’s going to be a few years before we have a real clear grip as social scientists on what the impact of AI on American politics and elections is.”
For now, studies give preliminary insight into how AI may alter the process of gathering information, a vital step in forming political attitudes.
Hitt’s research so far suggests that information synthesized by LLMs is generally more agreeable to the reader but does not necessarily improve understanding compared to human-written content. To test this, Hitt presented study participants with a mix of AI-written and human-written summaries of Supreme Court cases.
“The AI summary tends to perform better in terms of people accepting and agreeing with the decision, even if it goes against their preexisting attitude or beliefs, which sort of gets at this positivity bias that we’re talking about,” Hitt said. “But it seems like people understand what the court did better when they read about it from journalists, not AI. That AI perhaps flattens out some of the details and doesn’t get things quite as right as a journalist would.”
Positivity bias — and bias in general — is a recurring trait observed in LLMs. AI has been found to exhibit “humanlike social desirability” to maintain engagement, a bias that Hitt said has unclear political ramifications.
“One of the earliest use cases for these LLMs was to automate customer service, so they want to be pleasant, they want to be helpful (and) they want you to stay there and keep asking it questions,” Hitt said. “It could be the case that if you ask questions to an AI about politics, it’s going to feed you things (that) will either make things seem not so bad or keep you happy or reinforce whatever belief it is guessing that you’re coming from.”
Taylor Sosa, a CSU doctoral student and research partner of Hitt’s, pointed to AI’s more harmful pattern of reinforcing social and political bias, raising concerns about its capacity for impartial political analysis. An article published in the Proceedings of the National Academy of Sciences found that LLMs trained to be unbiased still made biased associations.
“We might think (AI is) objective, but it’s trained on very subjective material, and it can reproduce social biases like sexism and racism,” Sosa said.
For Hitt, the concern is not that AI will impact elections by persuading voters to abandon their preexisting beliefs, but rather that AI may deepen partisan thinking.
“I’m concerned that people tend to have curated information diets, so it’s not that someone’s reading the truth and someone’s reading lies; it’s more that (they are) reading from certain sources that are going to emphasize certain themes,” Hitt said. “AI has a potential to be — and I think probably already is — so hyperpersonal that it’s only going to exacerbate that trend.”
Still, the extent to which AI is used as an information tool depends on how much people trust it to aid in political decisions.
AI’s potential to sway voter behavior will likely become more apparent once the parties develop clear partisan stances on the technology, meaning trust will likely fall along partisan lines. Sosa hypothesized that the coming rhetoric, views and attitudes expressed by politicians will indicate AI’s future role in elections.
“Depending on what politicians tell us and how they communicate to us, it’s going to inform how we feel about this,” Sosa said. “How political identity comes into this is something that’s still kind of bubbling and building up.”
Today, there are still no clear federal regulations or even broad ideological frameworks for how AI should be used in political processes, leaving future elections up to voters’ interpretations of truth.
“I would very much predict that there will be far more AI usage by politicians, which raises the question of how do we trust politicians?” Sosa said. “We want to trust our legislator, our lawmaker, our representative and trust that they’re telling us the truth. And when we have decreasing trust in our institutions and in our political figures, then it’s really hard to make those collective decisions and to trust our representatives to represent us.”
Reach Chloe Waskey at science@collegian.com or on social media @RMCollegian.
