The EU Artificial Intelligence Act – An Intelligent Piece of Legislation?
At the global AI Safety Summit last November at Bletchley Park – the base of UK code breakers during the Second World War – US Vice-President Kamala Harris said: “Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyze global action and build global consensus in a way no other country can.” So, what does that mean for the EU’s ambition to set the global rules for AI?
There is excitement among investors that rapidly developing AI will change the way we live and offer enormous benefits in education, energy, the environment, healthcare, manufacturing, and transport. The arrival of consumer-facing AI, exemplified by the meteoric rise of ChatGPT, has made the workings of machine learning models more visible and led to acute policy concerns about safety, personal data, intellectual property rights, industry structure, and, more generally, the black-box nature of the technology.
In this context, the big economies are trying to identify and mitigate risks, shape a global policy agenda, and compete to set the regulatory blueprint. The Bletchley Declaration contains commitments to intensifying and sustaining cooperation. Anticipating the summit, the US administration issued an executive order on Safe, Secure and Trustworthy Artificial Intelligence, building on voluntary commitments from seven key US companies: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. China had already adopted Interim Measures for the Management of Generative Artificial Intelligence Services. However, it is the EU that is most advanced in adopting comprehensive and detailed legislation. The new rules are complex and will affect science and technology policy globally, even if other jurisdictions go down different paths.
The Brussels compromise machinery
On 8 December 2023, after negotiating for more than two years, European Union lawmakers reached final agreement on the much-anticipated AI Act. It is the first extensive law in the world governing the development, market placement, and use of AI systems, and it will apply beyond the EU’s borders. The European Parliament and the Council confirmed that several relevant changes to the Commission’s original proposal from 2021 had been included, inter alia, new requirements to conduct a fundamental rights impact assessment for certain AI systems, a revised definition of AI, and more stringent rules on providers of high-impact foundation models.
The higher the risk that an AI system poses to health, safety or fundamental rights, the stricter the rules. The AI Act establishes three categories. The following AI systems are considered to be a clear threat to the fundamental rights of people, pose an unacceptable risk and therefore will be banned:
- Biometric categorization systems that use sensitive characteristics such as political, religious and philosophical beliefs, sexual orientation, and race.
- Untargeted scraping of facial images from the internet or closed-circuit television footage to create facial recognition databases.
- Emotion recognition in the workplace and educational institutions.
- Social scoring based on behavior or personal characteristics.
- Systems that manipulate human behavior to circumvent free will.
- AI used to exploit the vulnerabilities of people due to age, disability, or social or economic situation.
AI systems will be classified as high risk due to their significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. Examples include certain critical infrastructures in the fields of water, gas, and electricity, medical devices, and systems for recruiting people. Certain applications used in law enforcement, border control, and the administration of justice and democratic processes will also be classified as high risk. They must undergo mandatory fundamental rights impact assessments and will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, and human oversight.
Most AI systems are expected to fall into the category of minimal risk. These applications – such as AI-enabled recommender systems or spam filters – will be exempt from most of the Act’s obligations.
Moreover, the AI Act sets transparency requirements: users should be made aware, for example, when they are interacting with a chatbot. Deep fakes and other AI-generated content will have to be labelled as such, and users need to be informed when biometric categorization or emotion recognition systems are being used.
The regulation also introduces guardrails for general-purpose AI models which will require transparency along the value chain. For models that could pose systemic risks, there will be additional obligations related to managing these risks and monitoring serious incidents, as well as performing model evaluation and adversarial testing.
Noncompliance with the AI Act can lead to fines ranging from €7.5 million, or 1.5% of a company’s global turnover, to €35 million, or 7% of global turnover, depending on the infringement. Furthermore, the agreement provides more proportionate caps on administrative fines for small and medium-sized enterprises (SMEs) and startups. National competent authorities will supervise implementation. To ensure harmonized implementation, the Commission will set up a European AI Office. Along with the national market surveillance authorities, the Office will be the first body to enforce binding rules on AI globally.
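The interplay between the fixed caps and the turnover percentages can be sketched in a few lines of code. This is an illustrative simplification, not a legal calculator: it assumes the commonly reported rule that for large firms the higher of the fixed amount and the turnover share applies, while the more proportionate SME cap is modelled here as the lower of the two; the function name and tier values are chosen for this example.

```python
def max_fine(tier_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative upper bound on an AI Act fine for one infringement tier.

    Tier values mirror the ranges in the text, e.g. EUR 35m / 7% for the
    most serious infringements and EUR 7.5m / 1.5% at the lower end.
    Large firms: the higher of the fixed cap and the turnover share.
    SMEs/startups (assumed here): the lower of the two.
    """
    pct_amount = turnover_pct * global_turnover_eur
    if is_sme:
        return min(tier_cap_eur, pct_amount)
    return max(tier_cap_eur, pct_amount)

# A large firm with EUR 1bn turnover at the top tier: the 7% share
# (EUR 70m) exceeds the EUR 35m fixed cap, so the percentage governs.
print(max_fine(35_000_000, 0.07, 1_000_000_000))
# Under the assumed SME rule, the same infringement is capped lower.
print(max_fine(35_000_000, 0.07, 1_000_000_000, is_sme=True))
```

The point the sketch makes is that for large companies the percentage-based cap dominates once turnover is high enough, which is what gives the fines their deterrent weight.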
Voting with their feet?
The EU AI Act offers a blueprint for a comprehensive risk-based approach that seeks to prevent harmful outcomes ex ante – before the event – for all AI systems. Societies should regulate unacceptably high-risk technologies, such as nuclear power, ex ante. Some experts say that AI is more like electricity – a general-purpose technology. In their view, rather than requiring detailed risk declarations, ex post liability rules could enable open-source innovation, an area in which European firms have particular strengths.
The complex risk hierarchy of the EU approach contrasts with the US focus on national security risks, where the federal executive can compel AI companies. The comprehensive ex ante risk assessments may impede innovation and concentrate investment in more lenient jurisdictions. In general, many experts are sceptical that legislators can anticipate the future of a general-purpose technology such as AI. At least EU legislators have tried.
This content is licensed under a Creative Commons Attribution 4.0 International license.