In November last year, a law student at a private university took the university to court for failing him in an assignment over the allegation that it had been completed using AI. His argument was largely that the university had not explicitly prohibited the use of AI, an episode that illustrates the urgent need for a well-thought-out policy on the use of AI in academic and research work.
On the one hand, the government has pushed for the integration of artificial intelligence in higher education. On the other, the growing use of AI tools by students has been a cause for concern, as the line between assistance and plagiarism becomes harder to draw. Despite all this, higher education still lacks a comprehensive AI policy.
The 2018 AI strategy paper by NITI Aayog talks about the potential of AI in “supplementing pedagogy”, facilitating lesson planning, customising learning content for multi-level classrooms, automating resource allocation, and so on. As far as usage by students is concerned, Intelligent Tutoring Systems are seen as the pathway. However, the way NITI Aayog envisions customised learning assumes that the tools will stay largely in the hands of teachers or administrators, who will use AI for assistance. The independent use of freely available AI tools by students remains largely unaddressed.
The Indian Council of Medical Research is possibly the only significant national body in India so far to have laid down guidelines on the ethical usage and application of AI.
Plagiarism and its detection
The University Grants Commission (UGC) did adopt the ‘Promotion of Academic Integrity and Prevention of Plagiarism in Higher Education’ regulations in 2018, which set upper limits on the percentage of content that can be copied and specify how it should be acknowledged and referenced. But even before that, detection itself remains a major challenge.
Vaibhav Narawade, president of the Mumbai University College Teachers’ Association, said, “AI tools are being used by students and teachers in a helpful way, but the drawback is that students are increasingly using them to substitute for original and creative thinking. We have some software that can detect plagiarism and AI use, but that is limited to thesis or dissertation and research submissions. Regular assignments and internal assessments are largely handwritten, and there is no way to check those for plagiarism or AI use.”
Moreover, as generative AI improves over time, makers of plagiarism-detection software such as Turnitin have themselves acknowledged that their tools may inaccurately flag AI use.
Ethical considerations
Even when AI use is detected, the dilemmas are many. On the one hand, it is argued that the output of freely accessible tools is openly available, so the individual using it should not be held accountable. On the other, the person using it did not create it and so, strictly speaking, should not get credit for it.
Apart from plagiarism, the concerns include, but are not limited to, the quality of academic output, the perpetuation of inherent biases in AI, the erosion of students’ critical thinking skills, data privacy, and intellectual property rights. Prof. Arun Tangirala, Dean of Competency and Outreach at IIT Tirupati, said, “While talking about the ethics of AI usage by students, one would be remiss not to mention the ethical concerns that abound in the very development and maintenance of AI models. There are examples where individuals’ content on certain platforms has been used for developing AI models without any permission, even from the platform. So the questions of originality, copyright and so on are built into the tool as well.”
Apart from that, AI tools collect our data and, given the lack of awareness around data privacy, our personal information is susceptible to misuse, he pointed out.
Comprehensive legislation at the international level to address these concerns, or to regulate the fast-evolving domain of generative AI and its consequences for intellectual property rights, is itself at a nascent stage. So confusion and gaps among academics and institutions about AI usage are only natural. In the meantime, some institutions and academics have adopted best practices. “AI is here to stay; the question is how can we use it responsibly?” asks Prof. Tangirala.
For teachers like Manjula Venkataraghavan, Associate Professor at the Manipal Institute of Communication, MAHE, transparency and a focus on responsible usage have been the way to go. She said, “As faculty, we are ourselves having workshops and training on how to leverage AI for academic as well as administrative purposes. Students will also inevitably use it. We use plagiarism-detection software, but they are not foolproof, and there are freely available tools which can help students humanise AI-generated data. So what we have been telling them is how to use it correctly, whether it is to structure their text or for a literature review in research.”
A team of two senior writing tutors and one fellow at the Centre for Writing and Communication of Ashoka University has been conducting a project assessing AI tools for academic writing in the humanities and social sciences. Neerav Dwivedi from the team said, “Since 2022, we’ve found a large dependence on AI tools amongst the student community. When interacting with students, we often emphasize the importance of including an AI-use statement where they declare how and why they have used AI for specific writing tasks. Sarah Eaton’s concept of post-plagiarism is productive to think through these evolving concerns.”
In a collective statement, Mr. Dwivedi’s team said that to use AI thoughtfully, one can ask clarifying questions, challenge assumptions, critique writing, and prompt deeper analysis and counterarguments. When used responsibly, it can enhance accessibility by assisting students, especially non-native English speakers, in refining clarity and structure. “For students not having proficiency in certain disciplines and familiarity with new topics, AI often serves as an accessible starting point for learning”, said the statement.
The team said the role of AI should remain that of a supplement rather than a substitute for intellectual engagement. “AI aids in synthesizing large volumes of literature, identifying key themes, and facilitating an iterative writing process through structured feedback and revision. By focusing on ethical use and integrating it as a supportive tool rather than a substitute for intellectual effort, academic institutions can harness its potential to enrich learning and research while maintaining academic integrity”, the statement said.
Cambridge International has put out a policy statement on the use of generative AI in student work submitted for assessment, stating that using “AI to create or enhance student work without acknowledgement risks being classed as plagiarism and, like other forms of malpractice, may be subject to penalty.” It laid down that all use of generative AI programs to conduct initial research, create text, images, sound or video, or plan a project must be acknowledged in the work, and that AI-generated material must be clearly referenced.
In Australia, institutes are slowly coming up with rules around this, and some schools are making two things mandatory, shared Prof. Tangirala. “One is declaring the tool that has been used. The second is the series of prompts that were given to arrive at that result. The logic here is transparency, as well as demonstrating the way in which the AI tool was used, which is itself a skill.”
The adjacent question is that of the skill sets needed in the future, he stressed. “What we need is a new skill set. Before calculators were introduced in school exams, only a certain kind of question could be asked. Once calculators were allowed, students no longer needed to be tested on their computational abilities. Instead, we could give more complex questions requiring students to think differently.”
So, instead of testing students on their ability to write code, which AI can now easily do, we could test their ability to understand and evaluate, say, three programs performing the same task and pick the best one.
Published – February 01, 2025 03:28 pm IST