Editor’s Note: All opinion section content reflects the views of the individual author only and does not represent a stance taken by The Collegian or its editorial board.
Artificial intelligence is here; there’s no denying it.
Walk into any lecture hall in any university in America right now, and you will see AI lighting up more screens than ever.
Students are using these large language models to write papers, solve equations and even write basic emails. Any student or professor in higher education can testify: There isn’t a single aspect of academic life that AI hasn’t affected in some manner.
But is this a good outcome for education? If AI really is the future, shouldn’t its integration into education be seen as a positive?
While it’s uncertain what the future of AI will look like in modern society, it is overwhelmingly certain that generative AI is incompatible with education, and the time has come for universities as a whole to take a significantly harsher stance on its use.
If the purpose of education and its institutions is to provide an environment in which students are given the ability to learn and grow as thinkers, professionals and human beings, there could not be a greater obstruction than artificial intelligence in its current state.
To be educated is to have tried; it is to have done the work to create new neural pathways in one’s brain. Artificial intelligence doesn’t just impede learning; it extirpates critical thinking entirely.
In a 2025 study conducted by the Swiss Business School, researchers discovered a “significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.” Cognitive offloading, the process by which one reduces the mental labor demanded of their own brain by transferring it to external resources, is a plague to education.
No matter the specific pedagogical approach, a key aspect of learning — especially in higher education — is the simple act of doing the work. When the brain is used, it forms new pathways to make solving difficult problems in the future easier.
Let us imagine a scenario in which a student has been asked to complete two tasks: task A, a check-the-box type task, and task B, a critical thinking task. After executing these tasks, the student’s brain has formed new neural pathways; the approach the student took to solving their assigned task has now been etched into their subconscious.
When the student is asked to solve task C — a hybrid check-the-box and critical thinking type assignment — the student isn’t starting from ground zero. By using the neural pathways formed from tasks A and B, they essentially have a road map for solving this new problem.
This, in its simplest sense, is the heartbeat of education: giving students the road map to solve new problems. Math students are taught how the slope of a line functions so that when they walk into a calculus class, they are capable of wrapping their heads around derivatives; chemistry students are taught periodic table trends so they don’t walk out of the classroom when asked to draw Grignard reaction mechanisms.
But what happens if the student never did task A or task B? What happens if — with the help of generative AI — they get all the way to task Y before ever using their brain? Maybe they’re passing classes at an acceptable grade point average; maybe their professors haven’t caught on so they have no reason to change; or maybe they’re too scared to transfer that cognitive offloading back onto their brain. Now, when their brain is faced with task Z, the only solution they know is to plug it into AI.
If the math student never learned slopes and then never learned derivatives, how will they ever engage with modern mathematical theory in their graduate and postgraduate work? If the chemistry student used ChatGPT to answer quiz questions on atomic radius or to determine product structures for their reactions, what value could they ever provide to a research laboratory that hires them for their skills in molecular synthesis?
This is not beneficial for society. If our next generation of thinkers, professionals and human beings can only accomplish tasks by plugging their dilemma into a large language model like ChatGPT, the result will be a homogenized, unintelligent mess.
A research study conducted by Cornell University earlier this year asked two groups of participants — one from the United States and the other from India — to complete “culturally grounded writing tasks with and without AI suggestions.” Participants were asked to write about topics like food or rituals and were randomly chosen to either write their essay completely organically or with assistance from GPT-4o.
Through the study, researchers found that in comparison to the organically written work, the writing produced by the AI-assisted group significantly trended toward homogenization. In the organically produced work, participants’ answers were influenced by each individual’s background. The AI-assisted work, however, produced similar answers that diminished nuances and stifled cultural differences.
Education is supposed to be an environment where these differences flourish and sculpt the next generation. AI is killing the diversity of thought that has pushed forward human progress for our entire history. AI isn’t just anti-education; it’s anti-intellectualism. If every young student is taught to offload their cognition to make their life easier, we will never again see what the human mind can accomplish, only what the programmed one can.
The proliferation of AI does not only destroy education’s primary goal; it is also beginning to corrupt its secondary goal: the construction of academic community.
Higher education is not meant to be a solitary experience of solving tasks A-Z in a room by yourself. By nature, it is a communal experience. Perhaps a student meets a group of friends in the first year of their undergraduate psychology degree. They work on schoolwork together, attend similar classes and join psychology honor society Psi Chi together. In doing this, the student has integrated into an academic community. This community, which is filled with like-minded individuals who have their own unique backgrounds and cognitive processes, is capable of a product far greater than the sum of its parts.
This experience, however, is fading. A recent study from Common Sense Media found that a third of teens use AI for social interaction and relationships and that 31% find “AI conversations to be as satisfying or more satisfying than human conversations.”
This is alarming. If, instead of finding a friend group to undergo their academic journey with, the psychology student outsourced their social interaction to ChatGPT, that academic community would never have been created. AI, which students are increasingly relying upon for all manner of tasks, alienates young academics from their prospective community and harms the potential of the next generation of academia.
What is to be done about this? With President Donald Trump’s recent executive order declaring it a priority for the United States to be a “global leader in AI” by “(revoking) certain existing AI policies and directives that act as barriers to American AI innovation,” it is clear these changes will not be coming from the federal level.
Instead, these changes will have to be enacted by the academic community itself. It is now time for faculty, administrations and students to act. Institutions should begin by blocking LLMs at the campus network level and punishing AI usage as they would any other form of academic dishonesty; professors must restructure their pedagogy to make the usage of AI fruitless; and students must be willing to take a stand and take up that cognitive load once more.
Reach Willow Engle at letters@collegian.com or on social media @CSUCollegian.