Photo courtesy of MHN
Angelina Godinez ’28
Section Editor
On Aug. 19, 2025, Mount Holyoke College publicly announced to students that Gemini, Google's AI software, is now among the many software tools available to them. The announcement immediately brought students' frustrated voices to the surface. Within just a few days, flyers appeared around campus reading “Keep AI out of MHC. Support freedom of thought. Protect artists and scholars.” Yet however controversial and contradictory the administration's embrace of artificial intelligence may be, it should come as no surprise to students, as the College already uses artificial intelligence through Workday, our campus hub for student and staff employment.
When applying to an on-campus job, students are pushed to complete what Workday calls a “quick apply,” in which you upload your resume and your experience is filled in automatically rather than entered by hand. Beyond the application itself, AI is widely used in the hiring process, making it almost impossible for students to get hired, whether they are on work-study or not. If students don't use the nonsensical keywords that AI can understand, good luck getting hired. Last fall and spring semester, I applied to over 25 campus jobs, and despite qualifying for work-study as a first-generation, low-income student, it took me months to finally find one.
Recently graduated alums face this issue as well, since AI is now commonly used to complete the tasks that many entry-level jobs require. For instance, with one quick look on Workday, I found a position asking that students “collect data primarily through internet research,” a job that quickly becomes riddled with AI, specifically Gemini, as anyone using Google can't avoid the AI-generated summary of several sources, which are often neither credited nor fact-checked. This poses a threat not only to positions such as this one, but to all students in academia.
This is especially true at a liberal arts college that has made sustainability its mission. It is widely known that the creation and use of artificial intelligence has a negative impact on the environment, undermining the College's goal of carbon neutrality by our bicentennial year of 2037. The one does not cancel out the other. Beyond contradicting the College's commitment to carbon neutrality, the use of AI is an outright insult to students who have purposely chosen to pursue higher education at a private liberal arts college, where learning and curiosity should be our driving forces, uninterrupted by generative AI.
The College has tried to counter these criticisms by insisting that “All users should continue to use AI in accordance with college policies and guidance including the MHC Guidelines for the Ethical Use of Generative AI.”
Within these guidelines, the College acknowledges that AI is growing rapidly and argues that, in academia, it is best to get a grasp on the technology and its errors. The College also notes its awareness of AI's negative environmental impact and its issues with privacy, security, academic integrity and equity, as AI is often coded with implicit biases against historically marginalized groups. This should be no surprise, as a majority of its creators are wealthy white men. These so-called guidelines then end with the cherry on top: they were created using ChatGPT. “These guidelines were developed by using ChatGPT to draw on best practices observed at peer institutions including Bucknell University, Wellesley College, and Iona University with substantial editing by faculty and staff at Mount Holyoke College,” the webpage states.
How can the College ask students for academic integrity when it can't even create its own guidelines on AI usage without using AI?
This question of artificial intelligence in the classroom leaves students wondering how Mount Holyoke intends to respect its students and honor the unique work of a gender-diverse liberal arts institution. When using em-dashes is a so-called “tell” of AI, will every humanities and English major be flagged, considering how available the College has made AI to students? In published writing such as the newspaper, research papers and theses, will students continue to read from beginning to end for information, or simply summarize with Gemini?
As of now, over 150 signatures have been collected on a petition titled “Keep AI Out of MHC,” and there has been no response from the administration to students' concerns about its ChatGPT-generated “guidelines.” Copies of the petition can be found on most bulletin boards around campus, in hopes of collecting signatures from like-minded students.
Karishma Ramkarran ’27 provided fact-checking.