The College introduces Google AI chat feature, Gemini

Graphic by Mari Al Tayb ’26

By Emma Quirk ’26 & Genevieve Zahner ’26

News and Photos Editor | News Editor

On Aug. 20, the College announced the introduction of Gemini, Google’s AI chat application, in an MHC: This Week email. According to the announcement, Gemini can be used “to explore ideas, draft or summarize writing, create content and images, and support research or creative projects.” A follow-up email reiterated this information, with a reminder for students to “follow the guidelines set by faculty in their individual courses, and be transparent about the origin and process used for their submitted work.” Additionally, there was a note for all users to follow the Mount Holyoke College Guidelines for the Ethical Use of Generative AI.

These guidelines exist “to ensure the ethical, secure, and responsible use of AI, fostering a culture of critical engagement with technology in line with the College’s mission and strategic vision as we navigate these changes as a community.” They go into detail about various aspects of AI usage, including ethical use, a culture of critical engagement, professional integrity and responsibility, and data privacy and security. At the bottom of the page, the guidelines state that ChatGPT was used to create them, with “substantial editing” by Mount Holyoke faculty and staff.

LITS has been at the forefront of investigations into AI usage, as well as the integration of Gemini and other generative AI tools on campus. The LITS Advisory Committee, College Compliance Committee, Leadership Council, a faculty forum and the Student Government Association Senate were all part of the decision to introduce Gemini. However, because of the focus on “the foundational technology environment, privacy, and information security of the College, I, as Chief Information Officer, made the decision,” Alex Wirth-Cauchon told Mount Holyoke News.

Multiple factors influenced this decision, including issues of privacy, equity and educational access. A significant concern was AI tools mining user information. “Sharing non-public information ... with such tools puts the community’s privacy and personal information security at risk,” Wirth-Cauchon said. “Our contract with Google prevents Gemini from using our information to train their model, advertising, or other uses.”

This became more pressing when Google started allowing college students to freely access Gemini for one year. However, this access “is not covered by the College’s contract that provides limits to what Google can do with the data submitted to Gemini,” Wirth-Cauchon said. “Additionally, we were concerned about the inequity created in a year when charges would begin for those accounts.”

Faculty and staff had expressed interest in access to generative AI, whether for course materials or other work at the College. Wirth-Cauchon stated, “Granting access to Gemini addressed this need without additional cost to the College, those departments, or the staff in those departments.”

In a Sept. 3 email, President Danielle Holley announced that there would be an AI Working Group sponsored by Provost Lisa Sullivan and Wirth-Cauchon. The committee will be split into sub-groups, including one connected with the Association of American Colleges and Universities (AAC&U) Institute on AI, Pedagogy, and the Curriculum. The AI Working Group will have faculty, staff and student members. Wirth-Cauchon said the purpose of this group is to “help us to broaden and deepen the community’s critical engagement with the impact of generative AI for the mission of the College.”

Vanessa Rosa, co-chair of critical race and political economy and associate professor of Latine studies, is part of Mount Holyoke’s AAC&U Institute on AI team. She attended a conference hosted by AAC&U over the summer, where she learned more about AI and about the Institute, which is essentially a year-long mentorship program. “Each institution puts together a team that will go through this kind of structured program to really think through AI for their campus,” Rosa said in an interview with Mount Holyoke News. “Who is Mount Holyoke? What is our mission? What are our values? And how do we need to be thinking very carefully about AI in relation to those things.”

She spoke about the importance of learning about AI in order to understand it. “I think it’s our responsibility to be educated and understand what AI is, what it does, and to interrogate what we don’t know yet,” Rosa said. “My major concerns around AI are intellectual property, equity, [and] the environment.”

Angie Gregory, sustainability program manager for the Miller Worley Center for the Environment, is also focused on the environmental impacts of AI. Gregory recognizes that AI can be useful, but encourages people to do their own research into its consequences. Looking at studies from CNBC, the University of Massachusetts Amherst and the United Nations can put the impacts into perspective, from “land use acquisition and development for the buildings that need to house these servers, to the amount of water that’s used to cool these servers.”

“I think we as consumers of these technologies need to think about what the demand side is saying to those industries,” Gregory said. “So we can reduce our demand side and be intentional with when and how we use it.”

Despite some of the harmful consequences for people and the environment, she is not pessimistic about the future. “It can all feel really overwhelming and outside of our control,” Gregory said. “[But] we are on this campus together in an enclosed, kind of tight community where we have the opportunity to connect with all these individuals in real time ... I think there’s opportunity in that.”

Alex Moskowitz, an assistant professor of English at Mount Holyoke College, spoke in an interview with Mount Holyoke News about where AI fits into an English classroom, stating, “One of the things that distinguishes an English classroom at Mount Holyoke from an English classroom at other institutions like UMass, is that we have really, really small classes.” Most English classes at Mount Holyoke are capped at 16 to 18 students, creating a more personalized, primarily discussion-based learning environment. “One of the things you can do here is … you read the text, you come up with your ideas, and you speak about them in class, and you speak about them with your classmates, you speak about them with your professors. This is the work that is possible here. AI has no role,” Moskowitz said.

Moskowitz also spoke about how the generation of knowledge for English students comes from reading, writing and discussion. He said that speaking about literature with classmates offers new perspectives and raises ideas one might never have considered, and that AI prevents students from learning anything new. He also described his classroom policy on AI, which he personally considers a form of plagiarism. “I tell students they are not allowed to use it for whatever purpose … I want you to learn this thing, and you can’t learn it through the use of AI … So it doesn’t serve a pedagogical purpose in my courses, therefore don’t use it,” he said.

Moskowitz also commented on the tension between Mount Holyoke’s efforts to become carbon neutral, such as the geothermal project, and its adoption of AI tools known to use immense amounts of energy. “Go look up the articles about what those data centers do to the communities that they’re in, they’re incredibly destructive, like the air quality, the water quality, everything. They destroy the immediate surroundings. And those communities are often Black and brown communities where those data centers are built,” he said.

Moskowitz summed up his philosophy on AI by saying, “There’s more I could say, but there are these political and ethical and environmental reasons that AI is really, really deeply problematic, and so I’ll say to students, don’t use it, because pedagogically, it doesn’t make sense.”

Mount Holyoke News also reached out to Mara Breen, a cognitive scientist and professor of psychology, to ask how fields such as hers, which use computational models of AI, are navigating the development of generative models. “So starting from the 1970s we had this term AI, artificial intelligence. Now what did it mean in 1971, [is] probably a little bit different from how we conceptualize it today,” she said, drawing a distinction between newer models such as ChatGPT and the models used in labs. “I use various machine learning algorithms, which some people would call AI, but that’s very different from these generative AI models like LLMs.”

Breen also spoke to how these models are used as learning tools. “As a cognitive scientist, I’m deeply interested in computational models as a tool of study, where we say, here’s what we know humans do. What do computational models do?”

She explained that AI has a place in her classroom in the form of computational models; her thoughts on generative models in the classroom, however, are different. “The value of a scientific paper is not the abstract, right? Usually it’s not. We’re not reading a paper because of the abstract, we’re saying, okay, but how did they operationalize their variables? What is the method that they used? What were their results? How did they interpret it?”

She also emphasized that the “potential benefit is not worth the cost” of using a generative model for tasks such as searching for an old email or redesigning a class.

Breen approaches AI in her classroom through education, explaining to students how AI works and what exactly it is by comparing it to neural networks and other models. She also explains how the models used in her field are helpful for offloading specific mathematical or experimental tasks, such as marking boundaries in research, while teaching students about the harms of generative AI, such as labor exploitation and the energy used by data centers. “We make the joke that [the brain] runs on, you know, coffee and Flaming Hot Cheetos, and GPT runs on all of the electricity in Texas,” she said.

Breen advises that before using AI, people should “make sure it’s a reasoned choice,” not use it as a replacement for Google or for doing research themselves, and get educated on what exactly AI is and what its effects are.

Madeleine Diesl ’28 contributed fact checking.