Key Parliamentary Committees Approve AI Act
The AI Act received approval from key parliamentary committees in the European Parliament on 11 May, advancing towards plenary adoption in mid-June. The legislation aims to regulate Artificial Intelligence based on its potential to cause harm. The Civil Liberties (LIBE) and Internal Market (IMCO) committees adopted the text by a large majority.
Plenary Adoption and Trilogue Negotiations
Plenary adoption is tentatively scheduled for 14 June, after which the proposal will enter the final stage of the legislative process: trilogues, the three-way negotiations with the EU Council and the European Commission.
Global Significance of the AI Act
Brando Benifei, a co-rapporteur for the legislation, highlighted its significance for the digital landscape globally, not just in Europe.
Definition of Artificial Intelligence
The AI Act’s definition of Artificial Intelligence was aligned with that of the Organisation for Economic Co-operation and Development (OECD). Since the OECD is considering revising its definition, EU lawmakers adjusted the wording to anticipate that potential change.
Banned AI Applications and High-Risk Practices
The legislation bans certain AI applications deemed to pose an unacceptable risk, such as manipulative techniques and social scoring. The list of prohibited practices was extended to include biometric categorisation, predictive policing, and the untargeted scraping of facial images to create facial recognition databases. Emotion recognition software is prohibited in law enforcement, border management, and educational and workplace settings.
Biometric Identification Systems Debate
Biometric identification systems were a contentious point, with a majority in Parliament supporting a complete ban on real-time remote biometric identification in publicly accessible spaces, despite opposition from the conservative European People’s Party.
Inclusion of General Purpose AI Systems
The AI Act now addresses General Purpose AI (GPAI) systems, which were not included in its original scope. The legislation establishes a tiered approach, with obligations mostly falling on economic operators that integrate these systems into high-risk applications. GPAI providers must support compliance by supplying relevant information and documentation on the AI model.
Requirements for Foundation Models and Generative AI
More stringent requirements are proposed for foundation models, the powerful general-purpose AI systems that can serve as the basis for other AI applications. Obligations include risk management, data governance, and independent vetting of the foundation model’s robustness. Generative AI models, such as ChatGPT, must disclose that content is AI-generated and provide a summary of the training data covered by copyright law.
Stricter Regime for High-Risk AI Applications
The AI Act introduces a stricter regime for high-risk AI applications, amending Annex III to provide more precise wording in areas such as critical infrastructure, education, employment, and access to essential services. The high-risk categories covering law enforcement, migration control, and the administration of justice were expanded, and the recommender systems of social media platforms were added to the list.
Obligations for High-Risk AI Providers and Users
High-risk AI providers now face more prescriptive obligations in risk management, data governance, technical documentation, and record keeping. Users of high-risk AI solutions must conduct a fundamental rights impact assessment that considers potential negative impacts on marginalised groups and the environment.
Centralisation in Enforcement Architecture
Lawmakers agreed on the need for centralisation in the enforcement architecture, especially for cross-border cases. Co-rapporteur Dragoș Tudorache proposed establishing an AI Office, which would have a supporting role, providing guidance and coordinating joint investigations. The European Commission was tasked with resolving disputes among national authorities on dangerous AI systems.