“Any sufficiently advanced technology is indistinguishable from magic.”
Arthur C Clarke’s words remind us that while AI might feel like magic — creating award-winning photography and outperforming humans in English understanding — it’s important for students and teachers to understand that there’s no “magic” involved, just algorithms, data and patterns.
While these algorithms, data and patterns make AI possible, they also drive ethical concerns, including exacerbating the digital divide.
This is where educators play a pivotal role in preparing AI-literate students for an evolving, AI-powered future workforce.
Despite 69 per cent of Australian students already using AI chatbots like OpenAI’s ChatGPT and Microsoft’s Copilot, almost half of these young people say they lack confidence in their AI skills — with girls less confident than boys.
From practical teaching strategies to addressing the risks of bias and misinformation in AI, educators must guide our students in an increasingly AI-driven world.
AI is pervasive
AI predates tools like ChatGPT. Technologies like Google search, email spam filtering, real-time translation and your smartphone’s autocorrect are all the result of decades of quiet progress in AI.
By 2030, as many as 1.3 million workers (9 per cent of Australia’s workforce) may need to transition to occupations that use AI technology. This is in addition to the 200,000 new jobs directly created by AI.
As AI becomes more powerful and pervasive, equipping the next generation of Australians with AI literacy is critical.
Internationally, governments are investing in building AI capability — in the UK, Prime Minister Keir Starmer recently announced a Google-backed AI campus to build AI capability in young people.
AI technology will not just affect the tech sector. It will increasingly augment human capabilities, making it essential for workers to adapt to this technological landscape, whether they are in tech or non-tech roles.
Today’s students will be tomorrow’s leaders in an AI-powered world. But they won’t get there on their own.
What is AI literacy?
AI literacy is becoming a fundamental skill, akin to reading, writing and digital skills.
It goes beyond using AI tools. It requires a holistic understanding of how AI works, including its limitations and risks.
AI literacy equips students to evaluate AI outputs critically and make informed decisions, guiding their own use of AI in their learning and everyday lives.
At the heart of AI literacy is critical thinking. Students must be able to fact-check AI-generated content, recognise biases and understand when AI might be misleading or incorrect.
Students also need to grasp the ethical implications of AI, including data privacy and the social impacts on employment, creativity and concepts of intellectual property.
As more journalists and media outlets use AI to produce content, errors or mistruths can spread rapidly and become amplified.
Teachers play a vital role in helping students distinguish fact from fiction and learn how to source credible information.
This is especially important given that AI systems, while designed to sound plausible, can sometimes hallucinate false information to satisfy user prompts.
Addressing AI’s ethical challenges
Privacy laws currently lag behind AI’s rapid development, leaving individual users with the burden of knowing and managing what information they hand over to AI companies when using these tools, and of educating themselves on how their data is used.
In a world where we seek to reduce inequalities, we must remember that since AI is trained on existing data, it can perpetuate historical biases and reinforce stereotypes unless the underlying datasets are carefully managed.
AI bias can stem from the initial training data or from the algorithm that drives the prediction or output. Models trained on biased or discriminatory data can extend and amplify existing inequalities for certain groups and individuals, a problem known as algorithmic discrimination.
This is particularly concerning as the data currently used often reflects predominantly white, Western, male-centred norms, which has led to the persistence of outdated stereotypes, such as women being cast as homemakers and men as professionals.
Implications extend beyond stereotyping — for example, in a medical setting, a model trained on data that excludes a population group (e.g. Indigenous Australians), or that fails to include this group in its algorithm, may result in harmful treatment plans or inaccurate diagnoses.
AI-powered automated insurance or lending decisions may perpetuate historically discriminatory practices and exclude some borrowers unfairly.
Similar risks have been identified in policing and criminal justice applications; image generation and editing; and recruitment.
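To make the mechanism concrete, here is a minimal sketch of how a model trained with one group missing from its data can fail that group. Everything here is synthetic and the "groups" are hypothetical labels invented purely for illustration; it is a toy, not a real medical system.

```python
# A minimal sketch, not from the article: all data is synthetic and the
# "groups" are hypothetical labels used purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(42)

def make_group(n, offset):
    """Synthetic patients: one test score and a true condition label.

    The same condition shifts the score by a group-specific offset,
    a stand-in for a measurement calibrated on one population only.
    """
    condition = rng.integers(0, 2, size=n)   # 0 = healthy, 1 = has condition
    score = rng.normal(0, 1, size=n) + 2 * condition + offset
    return score.reshape(-1, 1), condition

# Training data comes entirely from group A (offset 0).
X_train, y_train = make_group(2000, offset=0.0)
model = LogisticRegression().fit(X_train, y_train)

# Group B's scores sit lower for the same condition, so the decision
# threshold learned from group A misses most of B's true cases.
for name, offset in [("group A", 0.0), ("group B", -2.0)]:
    X, y = make_group(500, offset)
    pred = model.predict(X)
    print(f"{name}: accuracy {model.score(X, y):.2f}, "
          f"cases detected {recall_score(y, pred):.2f}")
# Roughly: group A detects ~84% of true cases; group B, ~16%.
```

The model is not malicious; it simply never saw the excluded group, which is exactly why representativeness in training data matters.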
Students and educators must think critically about AI-generated content and recommendations, staying alert to potential flaws and bias in the data used to train the underlying models, in their algorithms and in the outputs they generate.
The digital divide in education
As AI becomes more prevalent, the digital divide risks widening, disproportionately affecting disadvantaged students who may have limited access to resources, including devices, connectivity and educational opportunities.
Some children also live in situations with limited or insufficient supervision, increasing the risk of exposure to harm.
Ensuring equitable access to AI-powered tools is a challenge that must be addressed.
AI-powered educational tools offer exciting possibilities, but they also face significant limitations.
These include biased information and the potential to generate inaccurate output and dangerous or age-inappropriate content.
To build trust in AI as an educational tool, strong safeguards must be in place to ensure AI outputs align with educational standards and pedagogical theories.
Supporting teachers and students with AI literacy
AI literacy is multi-pronged. Curriculum design, teacher training, resource design, and student and teacher perspectives are equally important.
Designing a single, standalone school subject to cover AI literacy is not enough. Instead, AI literacy should take a more holistic approach, transferring AI knowledge and methods into core subjects.
For example, basic machine learning algorithms could be taught in mathematics. Or in history, students could compare historical images to those generated by AI, to learn how to discern accurate sources. The linguistic and contextual design of prompts could be an English module. Ethical implications can be covered in a social science subject. At the moment, learning about AI is limited to the Digital Technologies curriculum.
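As one hypothetical example of what that mathematics module could look like, here is a short sketch that fits a straight line to made-up data using the least-squares formulas students can derive and verify by hand:

```python
# A hypothetical maths-class exercise: fit a line to made-up data using
# least-squares formulas that students can derive and check by hand.
x = [1, 2, 3, 4, 5]        # e.g. hours studied
y = [52, 60, 68, 74, 85]   # e.g. test scores (invented numbers)

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope m = sum((xi - mean_x)(yi - mean_y)) / sum((xi - mean_x)^2)
m = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
    / sum((xi - mean_x) ** 2 for xi in x)
b = mean_y - m * mean_x    # intercept: the line passes through the means

print(f"model: score = {m:.1f} * hours + {b:.1f}")
print(f"prediction for 6 hours: {m * 6 + b:.1f}")
```

The point of such an exercise is that a "model" is nothing more than formulas fitted to data, demystifying the word before students meet larger systems.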
Meaningful learning requires collaborative and active teaching strategies, where real problem-solving is at the core of the learning process.
Programs like PopBots and Scratch help students learn programming and AI in a play-based environment, fostering computational thinking.
Hands-on activities, such as exploring dataset bias and assessing whether data is representative and fair, encourage critical thinking about AI and ethical reasoning — key skills in a world increasingly shaped by this technology.
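As a sketch of what such an activity could look like in practice (the dataset below is invented for illustration), students might tally how groups are represented in a small collection of labelled examples before trusting any model trained on it:

```python
# A sketch of a classroom activity with an invented dataset: tally how
# groups are represented before trusting a model trained on the data.
from collections import Counter

samples = [  # rows a class might have collected for a project
    {"label": "doctor", "gender": "male"},
    {"label": "doctor", "gender": "male"},
    {"label": "doctor", "gender": "male"},
    {"label": "doctor", "gender": "female"},
    {"label": "nurse", "gender": "female"},
    {"label": "nurse", "gender": "female"},
]

for label in ("doctor", "nurse"):
    counts = Counter(s["gender"] for s in samples if s["label"] == label)
    total = sum(counts.values())
    shares = ", ".join(f"{g}: {c / total:.0%}" for g, c in counts.items())
    print(f"{label}: {shares}")
# Discussion prompt: does this split match reality, and what stereotype
# would a model trained on these labels learn?
```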
Research has found that project-based learning, human–computer collaborative play and game-based learning are all successful approaches to AI literacy education.
The future of AI tools in education will focus on personalised and inclusive learning experiences, while supporting teachers with routine administrative tasks, such as marking or lesson planning.
AI can analyse student data to identify learning gaps and suggest targeted interventions. This allows teachers to spend more time on meaningful student interactions, focusing on creativity, emotional intelligence and critical thinking — all while teaching students how to collaborate effectively with AI.
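In its simplest form, "identifying learning gaps" can be sketched as flagging topics where a student sits below a benchmark. Real adaptive-learning systems are far more sophisticated; the names, scores and threshold below are assumptions for illustration only.

```python
# A toy sketch with invented names, scores and a teacher-chosen
# benchmark; real adaptive-learning systems are far more sophisticated.
scores = {
    "Student 1": {"fractions": 0.85, "algebra": 0.45, "geometry": 0.70},
    "Student 2": {"fractions": 0.55, "algebra": 0.90, "geometry": 0.65},
}
BENCHMARK = 0.6  # assumed mastery threshold

for student, topics in scores.items():
    gaps = [topic for topic, s in topics.items() if s < BENCHMARK]
    if gaps:
        print(f"{student}: suggest revision of {', '.join(gaps)}")
```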
As AI continues to reshape our world, teachers are uniquely positioned to prepare students with the skills they need to succeed.
AI literacy is not just about understanding the technology; it’s about fostering critical thinking, ethical awareness and the ability to work alongside AI responsibly and effectively.
Day of AI Australia is a free classroom-based program that provides foundational AI literacy for students and teachers in Australian schools. Developed by AI and education experts, it is aligned with the Australian curriculum.
Natasha Banks is Program Director for Day of AI Australia. Associate Professor Lynn Gribble is an education-focused academic at UNSW’s School of Management & Governance and co-leads the AI Community of Practice. Dr Jake Renzella is Senior Lecturer, Director of Studies (Computer Science) and Co-Head of the Computing and Education research group in UNSW’s School of Computer Science and Engineering. Dr Sasha Vassar is a senior lecturer and Nexus Fellow at UNSW’s School of Computer Science and Engineering.