The first set of restrictions under the European Union's landmark AI Act took effect on Sunday, February 2. This means that AI systems deemed an 'unacceptable risk' under the legislation are now illegal in countries within the bloc.
The following categories of AI systems have now been banned under the legislation as they are considered to be “a clear threat to the safety, livelihoods and rights of people”:
– Social scoring systems
– Emotion recognition AI systems in workplaces and education institutions
– Individual criminal offence risk assessment or prediction tools
– Harmful AI-based manipulation and deception tools
– Harmful AI-based tools to exploit vulnerabilities
Practices such as the untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases; biometric categorisation to deduce certain protected characteristics; and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces have also been banned.
However, critics have pointed out that the AI Act contains several exemptions that allow European police and migration authorities to use AI to track terror attack suspects.
A legal obligation on companies to ensure sufficient AI literacy among their staff is also among the provisions of the AI Act that came into force on Sunday.
Companies that fail to comply with the AI Act could face fines of up to 35 million euros ($35.8 million) or 7 per cent of their global annual revenues, whichever amount is higher, according to a report by CNBC.
The first-of-its-kind regulatory framework for AI officially entered into force in August last year. However, its provisions are being implemented in phases. For instance, the governance rules and obligations for tech companies that develop general-purpose AI models will come into force on August 2, 2025, according to the official website.
General-purpose AI (GPAI) models include large language models (LLMs) such as OpenAI's GPT series. Companies that develop high-risk AI systems for use in critical sectors such as education, medicine, and transport have an extended transition period until August 2, 2027.