“Resistance is futile.” These are words I heard while attending a session on ethical use of AI in journalism at the Intercollegiate Broadcasting Systems conference this past weekend in New York City. We were told directly by the professionals brought in to talk about this topic that AI is inevitable, so there was no point resisting. This was after my friends were actively pushing back against some of their arguments for AI’s use in journalism. It felt targeted.
Conceptually, I don’t think AI is inherently bad or evil. Humans have long shown a drive to innovate continuously — finding new ways to make our lives more efficient and easier. In some ways, I do think generative AI as we see it today was probably inevitable. But I don’t think that means we approach it with blind acceptance or surrender all resistance. Resistance is key to a functioning democracy, society and human race. There are countless ethical and logistical concerns about generative AI — perhaps too many to cover in this article, but there are a few I want to highlight.
The first is consent surrounding AI. Since 2023, Alex Reisner, a staff writer for The Atlantic, has been investigating the media used to train the most popular AI models in a series dubbed “AI Watchdog.” Millions of videos, books, songs, articles, artworks and other writing are being used to train these models without the consent of the original creators and in violation of copyright laws. While the companies claim their models are learning from this content, generative AI is doing less learning and more copying.
I am worried about my work and writing being used to train AI. I have dozens of articles published online — none of which I have given my consent to be fed to AI models. The actual experience of writing is invaluable to me; whether it’s creative, reflective or news writing. I am proud of my writing and the work I am continuously putting into it and I want credit for my work.
AI is growing fast and it is everywhere. At the current rate, this growth is not sustainable. According to the United Nations Environment Programme, a ChatGPT search uses 10 times the electricity of a standard Google search. With high demand for new data centers to run generative AI, one estimate projects that by 2027, global AI infrastructure could consume up to 6.6 billion cubic meters of water — as much as roughly 35 million people use.
Climate change is already doing irreparable damage and is disproportionately affecting places and people with the least amount of resources. I am concerned about the negative environmental impact of AI.
In February, a survey went out to the Goshen College student body from the new AI Taskforce asking about needs and usage around AI. While taking the survey, I felt that there was an assumption that whoever was taking it would be pro-AI. There was only one open-ended question, which asked how AI could be helpful to us as college students.
This prompted several conversations with friends: we are concerned that the negative impacts are being understated, and that there is an assumption that we are all on board with AI without hesitation.
A study from MIT showed that AI is rapidly increasing our reliance on technology. I am concerned that we will grow too dependent. I have watched “Wall-E” too many times to not be concerned.
It is increasingly difficult to avoid AI. Even in classes, some of my peers are being required to use AI for assignments, or having their work put into AI by their professors without their consent.
I want to be able to opt out of using AI — in the classroom and out. I want my education to encourage critical thinking — not blind surrender to the “inevitability” of AI.
I want to write my own emails with personable salutations. I want to send poorly photoshopped photos to my friends. I want to tell compelling stories from my own experience with my own words. I want to learn the nitty-gritty of writing an article — to write the interview questions and come up with good headlines. I want to be creative!
I want to learn! I am here to learn from my professors and from my peers and from my co-curriculars. I want the human experience of learning and building relationships and having difficult conversations.
I have many concerns. How can I not?
While I cannot stop people from using AI, resistance is never futile — it is how we improve.
Mackenzie Miller, a junior communication major, is the digital editor at The Record, program director at The Globe, a leader of the Peace and Justice Collective and a member of the chapel singing team. Her favorite pastime is yapping with friends about topics as high stakes as AI, and as low stakes as “The Pitt.”