In a victory for fundamental rights, the European Parliament’s Artificial Intelligence Act (AI Act) text has been given the green light in the Committees for Internal Market and Consumer Protection (IMCO) and for Civil Liberties, Justice and Home Affairs (LIBE). The AI Act is a cornerstone piece of legislation that should ensure people across the EU and at its borders are protected from surveillance, and that their rights are put before Big Tech profits.
While some troubling gaps remain, the Parliament’s text integrates key protections and human rights safeguards that Access Now and coalition partners have been advocating for.
Access Now particularly welcomes the following aspects of the text:
- Banning biometric surveillance: a full ban on real-time remote biometric identification (RBI) in public spaces and most uses of post or retroactive RBI, and a ban on emotion recognition in law enforcement, border management, workplaces, and education;
- Stopping racist, discriminatory AI: bans on discriminatory biometric categorisation, predictive policing, and the mass scraping of biometric data to create surveillance databases;
- Addressing human rights impacts: an obligation on deployers of high-risk AI systems to perform a fundamental rights impact assessment, which public authorities have to publish;
- Increased transparency: the expanded scope of Article 60’s publicly viewable database to include deployments of high-risk AI systems by public authorities; and
- Strengthening protections for people on the move: inclusion of EU migration databases in scope, and the addition of new high-risk systems, such as all biometric identification systems, predictive analytics, and AI border surveillance in the migration context.
While this vote is an important step in the negotiations, the European Parliament must confirm and strengthen this text in the upcoming June plenary vote.
The AI Act still has serious shortcomings. Access Now urges EU co-legislators to prioritise the following issues in the upcoming trilogue negotiations:
- Eliminating loopholes: remove the additional layer added to Article 6, which allows a self-assessment-based carve-out from high-risk classification; this undermines the already flawed risk-based approach and creates a serious loophole that threatens to turn the AI Act into self-regulation;
- Preventing rights abuses in the migration context: add bans on automated risk assessment in migration procedures and on predictive analytics systems used to curtail migration movements; strengthen the amendment on Article 83; and considerably shorten the grace period granted to operators of EU migration databases; and
- Preventing state overreach: preserve the scope of the Regulation, and reject any attempt to exclude national security, law enforcement, and migration authorities.