Britain will become the first country in the world to ban the use of artificial intelligence tools to create child sexual abuse images.
The move comes as new requirements for regulating the use of AI come into force in the European Union, with the implementation of the EU AI Act.
Possessing, taking, making, showing or distributing explicit images of children is a crime in England and Wales. The new offences in the UK will target the use of AI tools to “nudeify” real-life images of children.
The move comes as online criminals increasingly use AI to create child abuse material, with reports of such explicit images rising nearly five-fold in 2024, according to the Internet Watch Foundation.
“We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person,” Britain’s interior minister Yvette Cooper said.
“It is vital that we tackle child sexual abuse online as well as offline so we can better protect the public from new and emerging crimes.”
Predators also use AI tools to disguise their identity and blackmail children with fake images to force them into further abuse, such as by streaming live images, the government said.
Meanwhile, the EU hopes that by laying down strict rules relatively early in the technology’s development it will address potential dangers and help shape the international agenda for regulating AI.
The act bans AI systems that exploit human vulnerabilities, including those using subliminal techniques, as well as social scoring for public and private purposes, as used in China to reward or punish individuals for their behaviour.
“The uptake of AI systems has a strong potential to bring societal benefits, economic growth and enhance EU innovation and global competitiveness,” the EU says, while warning against “new risks related to user safety, including physical safety, and fundamental rights”.
Certain powerful AI models currently in wide use “could even pose systemic risks”, it says.
Emotion recognition in the workplace or at educational institutions is banned, with an exception for medical or safety reasons, such as detecting fatigue in a pilot.
Biometric categorisation in public spaces, for example by camera surveillance, is also banned.
Police and other security agencies will be allowed to use facial recognition to track certain crimes, such as people-trafficking and terrorism.
Starting on Sunday, companies developing or using AI will have to assess their systems for the level of risk and take suitable measures to comply with the legal requirements.
“High-risk” AI applications include areas such as law enforcement and employment.
AI systems intended for use in high-risk areas will have to meet various standards spanning transparency, accuracy, cybersecurity and quality of training data.
Such systems will have to obtain certification from approved bodies before they can be put on the EU market. A new commission body called the AI Office will oversee EU-wide enforcement.
The AI Act also lays out more basic rules for general purpose systems that may be used in various situations – some high-risk, others not. For example, providers of such systems will have to keep certain technical documents for audit.
AI-generated content such as images, sound or text will also have to be marked as such to guard against misleading deepfake material.
The maximum fine under the AI Act, reserved for using an AI system for a specifically banned purpose, is 35 million euros ($A58 million) or seven per cent of a company’s annual revenue.
Fines for infringements of the AI Act’s other legal obligations can reach three per cent of revenue, while supplying incorrect information to regulators can draw fines of up to 1.5 per cent.
with Reuters