Colorado State University launched CSU-GPT this October, an AI-driven chat program designed to provide comprehensive academic and administrative support. Amid varied attitudes toward AI use in academia, CSU also announced plans for a second, more casual AI chatbot called RamGPT, which is currently in its pilot phase and set to launch in spring 2026, as detailed during an all-day AI symposium held earlier this year.
According to the AI @ CSU webpage, CSU-GPT is a product of the university’s partnership with Microsoft Azure, an AI-integrated version of the digital environment CSU already uses. The chat-based AI model is powered by Microsoft’s NebulaONE, a large language model that integrates multiple AI systems to act as an assistant tailored to students and their individual goals. The LLM is governed by CSU’s Responsible AI guidelines, meaning information used to train CSU-GPT is kept private and is not visible to anyone else.
CSU students with the required credentials can access CSU-GPT through the CSU AI Hub platform. The system differs from external OpenAI services such as ChatGPT largely because it integrates several Microsoft Azure products and follows protocols in line with CSU’s AI Governance Guidelines, which were created in part to ease concerns over AI use among students and educators alike.
Aaron Bauer, a senior instructional designer for CSU’s Office of Engagement and Extension, said he has encountered distrust of AI among both students and professors.
“In my experience, many people who are most wary of AI or think it is dangerous don’t have a lot of firsthand experience with AI tools,” Bauer said. “Strong AI literacy for faculty and students is an important step to overcoming AI suspicion. A solid understanding of what AI tools actually do and how they work is the best way to mitigate these types of issues.”
Digital literacy in academic spaces now includes recognizing AI and its implications, and some educators are working to refine AI detection tools that estimate the likelihood that content is AI-generated. Bauer said those who freely claim AI-generated content as their own show a lack of integrity.
“If there is a media outlet, educator, politician or even YouTuber using AI-generated content claiming it is authentic, that should be enough to immediately discount them,” Bauer said. “Integrity has always been an issue, but the prevalence of AI now just brings that issue to the forefront.”
Balancing the ethical use of AI tools in the classroom is a process CSU has begun to invest in with upcoming products like RamGPT, though the university has yet to release specific details on the new program. Asked about the evolution of these ethical guidelines in the classroom, Bauer said trust is an integral component.
“There has to be a level of trust between a teacher and a student in order for a classroom to be as effective as it can be,” Bauer said. “I highly advise all teachers have an AI-usage disclosure document that students can submit along with any assignment to disclose how and why they used any AI tools, with the understanding that a teacher will not punish them for the usage if they are honest about it.”
Ali Hasan, an associate professor and chair of the philosophy department at the University of Iowa, delivered a presentation at CSU on anthropomorphism and AI models. As a philosophy researcher, Hasan has expertise in ethical frameworks, personhood and the effects of adding more human features to AI models.
His lecture detailed how AI models have become increasingly entwined with therapy and simulations of friendship, which adds scrutiny to the adoption of CSU-GPT for college students. Hasan suggested holding AI to a level of accountability greater than that expected of non-AI information tools but less than that expected of humans.
“There may be good reasons to treat AI as persons in that sense and say, ‘Well, we need to shut this AI down,’” Hasan said. “But there may be a way that the law can treat them as agents. Corporations are not moral patients; they’re moral agents.”
Joseph Brown, director of the academic integrity program at the Institute for Learning and Teaching at CSU, has worked to create evolving ethical frameworks for teaching and learning with AI. He said transparency is a key part of maintaining ethical integrity.
“Until industry expectations solidify about how to acknowledge use and to separate one’s own ideas from co-authored ones, the consensus will be on transparency,” Brown said. “I’ve tried to model that at TILT by acknowledging the use of generative AI in resources we’ve built. By contrast, in situations where total authenticity is the expectation, I imagine we’ll see verification services erupt as a kind of quality control certification.”
Brown also noted the human value of authentic expression, saying cultures value authenticity and usually want to know what is human-created and what is not. He added that he does not foresee a future in which audiences would not care which content is AI-generated and which is human-created.
Reach Colin Hoffman at news@collegian.com or on social media @RMCollegian.
