Alli Adams
Editor’s Note: All opinion section content reflects the views of the individual author only and does not represent a stance taken by The Collegian or its editorial board.
We can’t put this genie back in the bottle.
Artificial intelligence isn’t going away. The technology exists now, and everyone knows. Even if one country limits its use, another won’t. So the question isn’t whether AI will be part of our future; it’s whether we will use it to make a future worth living in.
Meanwhile, our world burns. Fire seasons stretch longer each year, coastlines sink and storms intensify. We say “the future is here,” yet we ignore that this future is dangerous.
Despite creating a tool smart enough to help us survive, we ask AI what to pack for vacation or for opinions on our outfit’s “vibes.” We ask it questions we could Google and, sometimes, questions we could answer ourselves if we paused to think. According to OpenAI, 73% of all AI conversations were for nonwork use as of June. This isn’t necessarily laziness; convenience is the default setting of technology. But defaulting to convenience narrows our sense of what the technology is for.
AI can help detect cancer, potentially generate protein binders — leading to cures we’ve chased for decades — and analyze natural disasters to help scientists understand the changing climate. This is not hypothetical; if applied properly, AI could help humanity survive.
But harnessing AI isn’t just about technical capability; it’s about values, which complicates the debate. In America, a capitalist democracy built on individual freedom, restricting access to AI goes against our nation’s principles. Our culture equates freedom with progress; that’s how we innovate. Around the world, governments ask similar questions: Who gets to use intelligence that no longer requires a human mind?
Some argue that the solution is to consolidate AI’s power, reserving its highest capabilities for scientific research, environmental intervention and government planning. The Organisation for Economic Co-operation and Development highlighted that AI’s growing energy consumption contributes significantly to carbon emissions, water use and overall environmental impact, suggesting that prioritizing high‑impact tasks is necessary for sustainability. Why waste supercomputing energy on writing grocery lists when that same computer could stop droughts or deploy resources in disasters?
Yet as AI systems become cheaper and more energy‑efficient, recent research shows overall energy consumption can still increase — a rebound effect economists call the Jevons paradox, in which efficiency gains drive more use. What looks like a “green” AI shift can be undone by scale: more users and more data centers. If we don’t curb usage and realign incentives, “sustainable AI” could remain a myth.
On paper, it sounds simple: Limit AI to where it matters most. But who decides what matters? Who determines which uses benefit humanity and which are trivial? Though centralized control may improve efficiency, it collides with values that define America. Taking technology out of the public’s hands because we can’t agree on its “correct” usage is not a solution. Even defining important usage gets messy: Should we prioritize curing cancer or slowing climate change? Whose emergency comes first?
The uncomfortable truth remains: Unrestricted access means unrestricted waste. Data centers worldwide increasingly consume electricity as AI workloads expand, and generative‑AI operations require substantially more power than typical computing tasks. Every frivolous prompt is a tiny contribution to the crisis we expect technology to fix. Data centers don’t run on optimism. If our ideals accelerate the crisis, can we still defend them?
We love to pretend democracy is inherently sustainable, assuming it must be correct because it feels moral. But sustainability doesn’t care about moral satisfaction. Survival is about resource allocation, collaboration and sometimes restriction. In some countries, alignment can be forced. If a government decides every watt of computing should fight climate change, resistance isn’t an option. But if our government tried that, would we comply for the sake of survival, or would we insist on the freedoms that define us — our rights, autonomy, and ability to speak, act and choose — even at risk to the planet?
The paradox is clear: If we govern AI with our values, we might not survive. If we govern AI with our survival, we might lose our values.
Perhaps there is a middle ground. Maybe the answer isn’t banning public AI but educating the public — not restricting access but redefining its purpose. Widespread technology requires widespread literacy. But will people listen? Do they care? And if we have to force awareness, is it morally right? These are questions education alone cannot answer.
AI is not inherently good or evil, but it is a reflection of us. It magnifies what we ask of it. Right now, we ask very little, treating it like a personal errand assistant.
The future will be written by the choices we make today. AI could help us live longer, cure diseases and stabilize our planet, or it could choose our outfits. The difference lies in whether we are brave enough to take our own survival seriously.
Reach Maci Lesh at letters@collegian.com or on social media @RMCollegian.