When I arrived at Boston College in the fall of 2023, I had all the typical first-year concerns swirling in my head: Would I find my classrooms without getting lost? How could I juggle a new social life alongside difficult coursework? What I did not expect was the collective anxiety around artificial intelligence. Just a few months before, OpenAI had released GPT-4, and suddenly a technology that once felt theoretical was right there in our hands.
On my first day, nearly every syllabus included a new policy about “Gen AI.” Many of these policies were vague or overly prohibitive, labeling AI use as “academic dishonesty” and threatening disciplinary action for “inappropriate” use. But what does inappropriate use really entail? Some professors banned AI for any purpose, even simple grammar checks, while others were unsure how to define acceptable usage. It quickly became obvious to me that many professors weren’t fully comfortable regulating a tool they didn’t completely understand.
Over the past year and a half, I interviewed more than 100 professors at BC and other universities in Boston to understand the root of their concerns about AI in education. Many of them expressed fears that AI could devolve into a “cheating machine,” allowing students to bypass the intellectual rigor of learning by outsourcing their thinking to a bot. They worried this reliance on AI might erode the skills students develop through grappling with challenging problems, engaging in critical analysis, and making mistakes along the way.
Others voiced concerns about how AI might foster a passive approach to learning by handing students answers on a silver platter. Take, for instance, a student who relies on AI to complete all their coding assignments. The submissions might look finished, but if the student lacks a fundamental understanding of coding concepts, how can they assess whether the AI’s output is accurate or even functional? This blind trust in AI, without any critical engagement, is what alarmed many of the professors I spoke to.
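To make that concrete, here is a hypothetical sketch (in Python, invented purely for illustration) of the kind of output those professors worried about: a short “median of exam scores” helper that runs without a single error message and still gets the answer wrong.

```python
# Hypothetical, plausible-looking AI output: a "median of exam scores" helper
# that runs cleanly but is quietly wrong.

def median(scores):
    """Return the median of a list of exam scores."""
    mid = len(scores) // 2
    if len(scores) % 2 == 1:
        return scores[mid]                      # bug: assumes the list is already sorted
    return (scores[mid - 1] + scores[mid]) / 2  # bug: also crashes on an empty list

print(median([3, 1, 2]))  # prints 1, but the true median is 2
```

A student who can trace that logic spots both flaws in seconds; a student who can’t has no way of knowing anything is wrong, which is exactly the blind spot professors described.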
Professors tried to contain their panic by turning to AI detection software like GPTZero and Turnitin. Some also experimented with programs that tracked students’ typing history to detect possible AI-generated content. But these measures unintentionally created a sense of unease: they made many students feel more like suspects than learners, and some responded by gaming the detection tools, intentionally inserting grammatical errors or quietly reworking AI-produced text so it appeared authentic.
These restrictive AI policies transformed the student-professor relationship into a game of “catch me if you can,” largely because universities failed to involve students in the policymaking process. When I asked my peers how they used AI, I heard thoughtful answers rarely mentioned in faculty meetings. One friend converted dense academic articles into bite-sized audio snippets with NotebookLM, making her reading load far more manageable. Others used a chatbot to brainstorm essay prompts, aiming not to dodge the writing process, but to jumpstart their creative thinking.
By 2024, the narrative surrounding AI started to evolve. Media coverage became more balanced, moving beyond the “CheaterGPT” headlines. Universities formed AI task forces not to police AI use, but to teach students how to use it ethically and responsibly. Meanwhile, outdated practices rooted in the Industrial Revolution—like rote memorization and standardized testing—came under fresh scrutiny. After all, if AI could already outperform students at such tasks, what should learning look like? Are we preparing students for a future powered by AI, or leaving them behind to fail?
We can think about the evolution of learning through a simple example. In our parents’ generation, when someone wanted to learn a new English word or look up a piece of information, they had to flip through a physical dictionary or encyclopedia to find it. The process wasn’t just time-consuming; it also demanded a level of effort that reinforced the learning experience.
Then came the internet, and suddenly, knowledge became vastly more accessible. Instead of carrying heavy books, we turned to online resources like dictionary websites, Wikipedia, and Google Search. With just a few keystrokes, we could instantly find definitions, summaries, and more. This shift didn’t make learning less valuable—few would argue that flipping through physical pages is a high-level cognitive skill. Instead, the time once spent searching was redirected toward comprehension and actively applying the new information.
Just as information evolved from print to digital, AI is now reshaping learning itself. With each technological leap, humans adapt by streamlining tedious steps and shifting their focus to more meaningful aspects of acquiring knowledge.
Rather than viewing AI as a threat, we can embrace it as an opportunity to emphasize the distinctly human qualities in learning that AI can’t easily replicate: creativity and critical thinking. By doing so, we also normalize making mistakes as part of the learning process. After all, professors aren’t just looking for essays with grammatically flawless sentences; they want to see students’ original thinking, which is inherently imperfect. Like past innovations, AI should serve as a tool that enhances and challenges existing learning methods, improving their efficiency and effectiveness, rather than becoming a crutch that stifles genuine learning.
In 2025, I’ve noticed a growing shift among professors at BC toward embracing this mindset. The language and policies in the Generative AI sections of my syllabi now reflect this change, with statements like, “The careful use of AI tools may positively augment the learning experience,” emphasizing AI as a collaborative tool rather than something to be avoided.
Universities’ shift from panic to partnership with AI is slowly taking hold, but sustaining it requires ongoing, structured conversations that actively include both faculty and students. Imagine if, instead of a blanket “no AI” statement on every syllabus, professors spent the first class asking students how they envision AI contributing to or detracting from their education. Such openness would transform AI from a perceived menace into a tool for critical engagement, where everyone is encouraged to explore best practices and ethical concerns together.
Ultimately, Boston College’s response to AI will shape whether it becomes a cheap shortcut or a powerful catalyst for learning.