OpenAI CEO Sam Altman wrote that soon “our children will have virtual tutors who can provide personalised instruction in any subject, in any language, and at whatever pace they need.” That “soon” may be a lot closer than we think.
In a world where technology is transforming many industries, generative AI is becoming a significant tool in education. Generative AI refers to systems that create “new” content by learning patterns from existing data. Its applications range from personalised learning experiences and automated administrative tasks to assistance in creating educational content.
In October 2024, the Department for Education published a policy paper setting out its position on the use of generative AI in the education sector, informed by the government’s white paper announcing an expert Frontier AI Taskforce to help the UK adopt the next generation of safe AI. The policy paper covers various aspects of generative AI, including its opportunities, effective use, limitations, protection of data, safeguarding of pupils and staff, formal assessments and the knowledge and skills needed for the future. In January 2024, the Open Innovation Team and the Department for Education published a paper on generative AI in education, including educator and expert reviews. This article explores some of the areas these papers cover, focusing on the benefits and the key considerations.
Benefits of generative AI in education
Students
Generative AI is often described as a “transformative technology”, and students are leveraging it as part of their educational journeys. In a global survey of students conducted this year, 86% reported using AI tools in their schoolwork, and nearly a quarter used them daily.
Another survey found that most higher education students use AI tools for resume and cover letter writing, while two-thirds also use AI for writing assistance, personalised recommendations and research.
Generative AI is being used for homework, revision and personalised learning through free and widely accessible tools such as ChatGPT and Memrise. This has come at the expense of paid-for providers. Chegg, a US-based online education company, is a case in point: since ChatGPT’s launch, it has lost more than half a million subscribers who paid up to $19.95 per month for prewritten answers to textbook questions and on-demand help from experts. Its stock is down 99% from early 2021, erasing approximately $14.5bn in market value. Although Chegg has built its own AI products, the company is struggling to convince customers and investors of its value in a market upended by ChatGPT.
Educators
By November 2023, 42% of primary and secondary school teachers had used generative AI in their role, up from 17% in April. AI can and does help teachers save time by creating educational resources, planning lessons and handling administrative tasks. It can also automate repetitive tasks, such as grading, freeing educators to focus on teaching. In December 2024, the Algorithmic Transparency Recording Standard Hub published a record for Oak National Academy’s Aila, an AI lesson assistant designed to help UK teachers create personalised lesson resources and reduce teacher workload. Using Aila, teachers can generate lesson plans through a chat interface. Additionally, generative AI can support the 1.6 million pupils in England with special educational needs by personalising learning materials.
Generative AI can also assist in evaluating the training requirements of the educators themselves, ensuring that any deficiencies are addressed with suitable training programmes to maximise their effectiveness.
Challenges and considerations of generative AI in education
Educational institutions will need to navigate the technology’s risks carefully if they are to reap its benefits and support growth and innovation in the sector.
Misinformation and disinformation
Blindly relying on AI without human oversight is a risk. AI tools can generate false or misleading information that is not always obvious. There’s a risk that users may unknowingly trust incorrect outputs and make decisions based on this information. Therefore, transparency and human feedback are crucial.
Data management
Educational institutions hold large amounts of sensitive data. Licensing a third-party solution may pose risks regarding data availability, privacy and security. Ensuring data safety is crucial. Even if the data is not sensitive, using an institution’s data in a third-party solution could influence other models or be used elsewhere.
Copyright is a consideration not only for learners but also for educational institutions using the technology to generate content. Because generative AI tools are trained on existing data, there is a real risk that their output reproduces copyrighted or plagiarised material. The onus falls on human users to use generated content sensibly.
Ethics
Generative AI models can inherit biases present in the training data, potentially leading to discriminatory or unfair outcomes. Legally, the use of biased AI tools in an onboarding scenario, for example, can lead to potential discrimination claims if decisions like candidate screening are influenced by built-in biases. Educational institutions will need to ensure that there is human oversight where AI is being used to inform decision-making to question the output and avoid, as much as possible, any inherent biases.
Exams and assessments
The Joint Council for Qualifications has published a paper on AI use in assessments, which sets out key requirements for teachers, who must:
- Only accept students’ own work.
- Identify AI-generated content and understand it will not meet the marking criteria.
- Investigate any doubts about the authenticity of work.
Illustrating the challenges educational institutions encounter in balancing AI literacy and academic integrity, a recent case involving a university student who used AI to complete an essay resulted in an academic misconduct investigation and a warning. Another case, in Missouri, USA, involved a student who had an assignment flagged as AI-generated by AI-detection tools. The student, who has autism spectrum disorder, argued that her writing style is formulaic, which potentially led to the AI-detection tool inaccurately flagging her assignment. This argument was accepted by the university. AI detectors can have significant error rates, leading to debates about their accuracy and fairness. The situation underscores the need for a balanced approach.
Fraud and cyber security
The days of poorly written scam emails from pretend princes are long gone. Criminals are now leveraging offensive generative AI for far more advanced and targeted attacks that are difficult to identify. Deepfake detections increased tenfold across all industries globally from 2022 to 2023, with the UK seeing one of the highest proportions of deepfake attacks among fraud cases, second only to Spain. Deepfakes can take the form of voice, image or video, posing a significant threat to our biometric data.
Generative AI is escalating the threat of ransomware attacks on educational institutions. With the technology developing at pace, large parts of the attack process can now be automated; unlike human-driven, targeted and tailored attacks, these automated attacks can be carried out at scale. AI allows systems to be monitored, code to be changed and new domains to be registered, all without time-consuming human intervention.
With the convenience offered by AI, websites can now be created rapidly and without coding skills, a capability that fraudsters are also exploiting. In a matter of minutes, they can build sites that mimic the appearance of legitimate ones, or that are entirely fake yet look professional.
Readiness
There is limited knowledge in the education sector about how to use generative AI, and digital skills and infrastructure challenges are restricting its further use among teachers. Effective use requires strong foundational knowledge and a shift to a more digital mindset, which must come from the top. Furthermore, inadequate digital infrastructure and limited access to essential technology can impede the transition towards leveraging technology in the sector.
Conclusion: How safe use of AI can change the education sector
The World Economic Forum argues that generative AI has the potential to revolutionise education by enhancing personalised learning, supporting teachers and reimagining assessment methods. However, it requires careful integration and critical thinking to address challenges and maximise benefits. Those in the sector should consider the following areas when looking to harness this technology:
- Developing clear and up-to-date policies covering generative AI use, addressing learners, educators and broader issues such as data privacy and copyright.
- Conducting a fraud and cyber risk assessment to understand the risk areas posed by generative AI.
- Providing training to upskill educators in understanding the capabilities and limitations of emerging technologies.
- Considering the tools and approaches used for exams and assessments.
- Emphasising a knowledge-rich curriculum to prepare students for future workplaces and safe AI use.
- Conducting research to understand the impact of generative AI on education.
Generative AI is expected to become an integral part of education, potentially even revolutionising it. The challenge for educational institutions is to keep pace with AI advancements and develop suitable policies, as well as a knowledge-rich curriculum, to prepare students for future workplaces and safe AI use.