The rise of generative AI has sparked widespread ethical concerns among educators, with many fearing a surge in student cheating. Some instructors are banning the tools in their classrooms and using detection software to police student behavior, while others are deciding to quit altogether.
We believe this approach is reactionary and overlooks a more pressing question: Should our traditional definition of academic integrity — the idea of fair play in the context of learning — still hold sway in the era of GenAI? Learning to work with the technology, rather than against it, is a more sustainable path forward.
Evidence suggests that plagiarism hasn’t significantly increased since GenAI tools became popular. Research from Stanford University indicates that cheating rates have actually remained stable, with the primary motivators for cheating being familiar issues like poor time management and overwhelming workloads, not access to AI technologies. Yet many educators report a growing distrust of their students’ work, which may indicate a deep-seated tension between longstanding notions of learning and the realities of today’s GenAI landscape.
Traditionally, education has been viewed as a unidirectional process, whereby knowledge is accumulated through individual effort. This model, rooted in the era of print, assumes that bypassing specific steps, like facing a blank page or drafting an outline, undermines authentic learning.
Correspondingly, assessment is understood to accurately represent student learning and, in turn, a student’s capabilities. GenAI challenges these assumptions by offering new ways to access and process information, much as earlier educational technologies — the printing press, word processors, hand calculators, and the internet — have done before it.
Though GenAI is being deployed in nearly every discipline, OpenAI’s 2022 release of ChatGPT made writing the proving ground for the technology in education. The ability to craft written documents from scratch — essays, reports, manuals, and narratives — has long been a cornerstone of liberal arts education.
While GenAI can now create coherent first drafts almost instantaneously, this doesn’t render writing skills obsolete. Instead, it shifts the emphasis from initial composition to critical analysis, revision, and “prompt engineering” (the ability to effectively instruct AI systems).
Not only does this shift preserve opportunities for students to learn fundamental skills; it also creates an environment in which new skills relevant to GenAI can emerge in supervised contexts — supervision that reinforces the notion that human expertise and oversight remain vital components of responsible GenAI use.
In other words, core competencies like critical thinking, creativity, and ethical reasoning will not only remain relevant, but their application is likely to reinvigorate attention to teaching and raise important questions about how we understand pedagogy and assessment.
The challenge for educators is to bridge the gap between traditional skills and emerging ones. Rather than banning GenAI outright and spending our time policing student behavior (a response that was more understandable two years ago), we should explore how the technology can enhance learning and advance human knowledge. Prohibiting students from using GenAI in their classes deprives them of the opportunity to develop the skills they will need to successfully utilize and evaluate GenAI outputs.
Some institutions are already leading the way. MIT has published guidelines encouraging faculty to incorporate AI tools responsibly into their curricula, and Stanford’s AI + Education initiative is developing resources to help educators integrate AI literacies into existing subjects.
In Florida and Mississippi, public and private institutions are collaborating to provide resources and guidance on how to reposition higher education to meet AI-related challenges and capitalize on new opportunities. Some of the topics being considered include AI fluency, identifying and managing deepfakes, and devising institutional policy to encourage thoughtful AI integration across curricula. Such approaches encourage faculty to revise and adjust legacy ideologies and methods rather than merely react to GenAI.
Transitioning to a new instructional paradigm won’t be easy. It requires rethinking assessment methods, updating academic integrity policies (not to mention the very notion of what constitutes academic integrity in light of these tools), and investing in faculty training. But clinging to outdated notions of learning while technology advances poses a greater risk.
So, is student use of AI cheating? Yes, if that use violates an explicit policy prohibiting it. Our point is that, as educators, we must ask ourselves whether these policies actually serve our students, or whether they reflect an increasingly obsolete educational model.
From our perspective, the future of education isn’t in banning GenAI but in harnessing its potential to create more engaging, relevant, and equitable learning experiences. To that end, our understanding of academic integrity must evolve alongside the technology that is quickly reshaping our world.
Sid Dobrin, Ph.D., is a professor and chair of the Department of English at the University of Florida. Bruce Fraser, Ph.D., is the director of the Institute for Academic Excellence at Indian River State College.