Ahead of the Artificial Intelligence (AI) Action Summit, which France will host early next month, Ayça Atabey explains why it is essential to adopt a children’s rights-based approach to AI used in education.
Why does the intersection of AI innovation, education and children’s rights demand urgent global attention? As AI reshapes education and affects educators’ and students’ lives, it is already clear that questioning its impact on human rights, particularly children’s right to education, is essential. There is a pressing need for a children’s rights-based approach in AI and education discussions, ensuring that technological advancements uphold their promises and create an inclusive, empowering future where every child can truly benefit from AI-driven innovation in learning.
The Digital Futures for Children centre facilitates research for a rights-respecting digital world for children. It supports building an evidence base for advocacy, fosters dialogue between academics and policymakers, amplifies children’s voices, and works to ensure children’s rights are upheld in the digital world. As part of this mission, and drawing primarily on UK sources, we responded to a call for contributions by the UN’s Special Rapporteur on the right to education on AI in education and its human rights-based use at the service of the advancement of the right to education.
The promise and reality of AI in education
AI is widely presented as a solution to key challenges in education, for example, by reducing teachers’ administrative and marking workloads, improving accessibility, supporting personalised learning, and helping children with diverse needs and abilities (e.g., through the creation of accessible materials for students with additional needs).
While there is significant hype around these potential benefits, AI’s actual educational value and ability to fulfil its promises remain uncertain, and robust evaluation frameworks to assess its effectiveness are lacking. In addition to the lack of independent, research-based evidence on its claimed benefits, the use of AI in education raises significant risks and concerns, including unknown short- and long-term impacts on children’s lives, data-related harms, and the potential to exacerbate existing inequalities, particularly for non-native English speakers and students from marginalised communities. Notably, teachers’ growing reliance on AI detection tools has already led to students’ work being disproportionately flagged as AI-generated, potentially undermining trust between students, teachers, and institutions. Relatedly, the UK Office of Qualifications and Examinations Regulation (Ofqual) noted that using AI as the sole method for marking student work is unlawful due to potential biases, inaccuracies, and lack of transparency.
The reality of AI in education is complex. While AI tools have the potential to enhance learning and teaching experiences, they also introduce significant risks that can undermine children’s rights, including their right to education. Notably, risks that might not initially appear education-related, such as biased algorithms or commercially exploitative data practices, can ultimately undermine the very purpose of these AI tools, defeating their intended educational function and eroding the trust and fairness on which education systems rely.
What role should regulation play?
Effective regulation is key to ensuring that children can benefit from the use of AI in education while enjoying their rights. Regulating AI use in education is an urgent matter that demands a comprehensive approach, including integrating children’s rights into all emerging legislative and policy efforts. It is also necessary to ensure that AI regulation developments align with existing frameworks, such as data protection and copyright laws, while embedding ethical and pedagogical considerations into emerging AI governance frameworks. In the UK, there is no binding legal framework for AI regulation. However, there have been AI policy initiatives, including the previous pro-innovation approach to AI regulation White Paper, and a binding legal framework for AI regulation is currently being discussed in Parliament. In the meantime, the UK GDPR and copyright laws remain relevant when implementing AI in education and must be interpreted and applied in a manner that respects children’s rights.
As always, we highlight the ICO’s Age-Appropriate Design Code (AADC) as an excellent example that puts children’s rights at its heart, translating data protection law principles into tangible design standards for online services. It requires considering how data practices can serve children’s best interests and respect their rights under the UNCRC. However, our previous research showed that the existing legal frameworks and their enforcement are not sufficient to protect children’s data and uphold their rights in educational settings in the UK. To address this problem, the Digital Futures for Children centre proposed a Blueprint for Education Data and an EdTech Code of Practice, supporting a certification scheme, to provide guardrails for rights-respecting innovation and enable the development of a trusted data infrastructure that benefits children, parents, schools, businesses and the public purse.
Why do we need a child rights-based approach in regulating AI use in education?
AI is transforming education, but are children’s rights being sidelined in the process? As schools and policymakers are keen to integrate AI into education, assessing its real-world impact on children’s rights is essential. While AI presents significant potential, its adoption has also generated considerable hype, with many promised benefits that remain unproven, resulting in an ‘EdTech tragedy’ in which AI tools fail to meet expectations or live up to their claimed benefits and intended purposes. Crucially, children want to understand both the benefits and risks of AI in education: they are worried about false promises and want their agency and rights to be respected.
General Comment No. 25 by the UN Committee on the Rights of the Child sets out how to implement the UNCRC in relation to the digital environment. As the authoritative document on how children’s rights apply in the digital environment, it must serve as our guiding light when looking at AI use in education. We highlight that children are unlikely to fully enjoy their right to education if their other rights are not protected. As Professor Sonia Livingstone notes, “The right to privacy and data protection increasingly mediates their other rights: in effect, without privacy it is hard to learn and participate without being exploited, stay safe, and thrive.” Notably, how children’s data are handled and their privacy is protected can affect their motivation to learn and their education. Invasive monitoring in schools can undermine children’s rights to privacy and freedom of thought and can negatively affect their mental health. Since children’s rights do not exist in isolation, undermining other rights can, in turn, also undermine children’s right to education, which is why a holistic, child-rights-based approach is essential in AI governance frameworks that apply to education.
As AI continues to impact education, it is critical to remember that technology is a tool to support education, not a solution to all its challenges. Above all, children’s rights must always come first. We call on all stakeholders to adopt a child-rights lens in discussions and practice, guided by the UNCRC and General Comment No. 25, to ensure that AI use in children’s education is beneficial, fair, and responsible.
This post gives the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.
Featured image: Photo by Emily Wade on Unsplash