Artificial Intelligence (AI) is the latest major development in the field of education, and it has the potential either to improve educational quality or to devalue it significantly, depending on how it is used.
In composition courses, using AI to produce outlines, take notes, or write, revise, or translate student writing undermines learning by allowing students to forgo essential steps in the writing process. Students commonly copy and paste writing prompts into ChatGPT and submit whatever it produces as their own work. This is no different from purchasing a paper that was written by someone else, and it should be treated as seriously as other forms of plagiarism.
Concerningly, some professors have instructed students to use and cite AI as a source. This is untenable because AI is neither reliable nor scholarly; worse, the information it synthesizes can be actively harmful. Gary Lieberman points out that AI has been responsible for false reports of sexual impropriety against real people as well as for false attribution of sources. Lieberman contends, “With an accuracy level hovering around 85 percent, one must consider ChatGPT as a non-scholarly resource at best and view it in the same light as we would a blog or a corporate website.” Teaching students to trust AI as a reliable source undermines the teaching of true academic research.
Furthermore, some universities have adopted institution-wide AI policies that encourage the use of AI in the classroom. At the University of Central Florida, for example, Laurie O. Campbell and Thomas D. Cox developed recommendations for student AI use on written assignments. They suggested specific chatbots as tools to help students with parts of their papers, such as using Research Rabbit to help write literature reviews, QuillBot to paraphrase text, and GrammarlyGO to “summarize” or write in “your style.” The ethical problem with this guidance is that students are likely to use these tools to summarize source material rather than read and process it themselves. In addition, instructing students to teach AI to write in their own style trivializes the act of writing and encourages students to cheat. As Lieberman points out in a 2011 study, over 40 percent of students did not view disguising copied material through the use of a paraphrase tool as cheating.
Administrators are often quick to embrace AI as the “cutting edge” of writing instruction while remaining uncritical of its ethical implications. And while university-wide AI policies may seem to be the most effective way to communicate ethical use to students, this approach is misguided. The learning objectives in communication courses are vastly different from those in other disciplines, such as mathematics. It might be appropriate to use AI in a math class to help students recognize statistical trends in data, but it may not be appropriate for students in a composition course to use AI to organize research notes into an outline. AI policies should therefore be course-specific and clearly communicated to students within each course. In addition, students should be informed that AI detection will be used to help instructors identify AI-generated text.
Both the belief that preventing or limiting student AI use is impractical and the fear of false positives in AI detection are unfounded. Eighteen U.S. cities, including Los Angeles, Oakland, Seattle, and New York City, have banned ChatGPT in K-12 education. Moreover, the detection tools used for AI-generated text are surprisingly accurate. Turnitin, for example, claims an accuracy rate of 98 percent in flagging papers when 20 percent or more of the text is AI-generated, a rate that Mark Drozdowski, Ed.D., both tested and confirmed.
Further assessing the accuracy of these tools, a study at Temple University examined the amount of AI generation detected in 30 original student-written papers submitted to Turnitin. The tool labeled 28 of the papers as zero percent AI-generated, flagged one at 11 percent, and could not score one. Since the most recent version of Turnitin’s AI detection has a threshold of 20 percent, there were effectively no false positives. The same study also showed that when papers were 100 percent AI-generated, they were flagged with 93 percent practical accuracy, and that when students used paraphrase tools to conceal AI use, Turnitin’s practical accuracy rate was still 90 percent. The study concluded that “Turnitin’s AI detector tool could be useful in situations where the use of AI was strictly prohibited.” This data shows that while AI detection will not catch every student who uses AI, it will catch at least 90 percent, and the chances of a false positive are low.
Since generative AI undermines the learning objectives in composition courses, its use should be forbidden unless the instructor specifically allows it. Such a policy will eliminate confusion over the ethical use of AI, but only if administrators resist the temptation to create universal policies that cover every course. Rather than requiring instructors to refer suspected AI use to an academic dean, instructors should be empowered to identify it and respond accordingly. After all, academic deans rarely have the same relationship with students as the instructor does, and the instructor sees samples of the students’ writing throughout the course. This familiarity with the students’ work makes instructors better equipped to make sound judgments in these situations.
According to Lieberman, “the most common detection method for AI-generated work is the seasoned professor who reads a paper and immediately sees that the style is not that of the student’s normal work.” Accordingly, instructors should not rely solely on AI-detection software to determine whether a student has cheated; they should compare a suspect submission with the student’s previous writing and prewriting activities to corroborate suspicion of unethical AI use. By empowering instructors to make decisions regarding AI and student honesty in their courses, administrators can reinforce high standards and accountability. Students who learn to write under these conditions will be better equipped for writing in future classes as well as in their professional roles.
Image: Nitcharee — Adobe Stock — Asset ID#: 746348935