AI Act

Human rights protections…with exceptions: what’s (not) in the EU’s AI Act deal

On Friday, December 8, just before midnight, the European Union’s co-legislative bodies reached a deal on the Artificial Intelligence Act, following three days of intense negotiations, and more than two-and-a-half years after the AI Act was first proposed in April 2021.

Negotiators from all sides of the political spectrum have effusively declared victory in triumphalist press releases and public statements issued since Friday night — but we aren’t celebrating. Opening the press conference to announce the deal, Spain’s Secretary of State for Digitalisation and Artificial Intelligence, Carme Artigas, said that there were “two narratives that I don’t want anybody to buy today,” claiming that not only had all outstanding political issues been settled, but also that the final text includes “no loopholes” undermining fundamental rights protections. Unfortunately, she’s wrong on both counts. Read on to learn why that is, why we need to keep scrutinising these final stages of the process, and why the fight to protect fundamental rights isn’t over yet. 

What the deal does — and what it doesn’t do 

The political agreement reached in trilogue on Friday does not in fact include a definitive legal text, nor is it set in stone. This makes the upcoming “technical meetings,” which will finalise the text before it is officially adopted by both the Parliament and Council, incredibly important, especially considering that some countries still do not consider it a done deal. POLITICO reports that the French government, for example, says it will “continue discussions to ensure that this regulation, and the [AI Act’s] final text, preserve the capacity for innovation.”

While the law’s most controversial and crucial aspects are unlikely to change significantly, this still leaves a lot of room for error in how definitions are finalised and exceptions are formulated. It’s also impossible to say how good or bad the agreed wording is, because it hasn’t been made public.

As an example, let’s look at the prohibited practices mentioned in the Council of the EU’s press release: namely “cognitive behavioural manipulation, the untargeted scrapping [sic] of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data, such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals.” 

While this might sound impressive at first glance, the devil is really in the details. When it comes to the prohibition on emotion recognition, for instance, Access Now and other civil society organisations have called for a full ban throughout the AI Act negotiations. Earlier this year, the European Parliament adopted a ban on emotion recognition in just four contexts: education, the workplace, law enforcement, and migration. But under pressure from member states, the prohibition on its use in law enforcement and migration contexts was removed from the final text, perfectly demonstrating the AI Act’s two-tiered approach to fundamental rights, which deems migrant and other already-marginalised people less worthy of protection.

The deal also apparently allows exceptions to the ban on emotion recognition for medical or safety purposes – a gaping loophole that could prove extremely dangerous. Companies already sell so-called “aggression detection” systems that label images of Black men as more aggressive than images of white men; if such a system were deployed in schools or workplaces, it could lead to racist surveillance of Black students or workers, for example. Again, we won’t know how bad this is until we see the final text – which also applies to all the other bans mentioned in the press release.

Worse still, when asked about the proposed ban on remote biometric identification (RBI) – a key civil society demand endorsed by the EU’s own data protection authorities and the European Parliament – Commissioner Thierry Breton called it “a full ban…with only three exceptions.” But a ban with three exceptions is anything but a full ban; it’s a guidebook on how to use a technology that has no place in a democratic, rights-based society. 

We also don’t yet know how the AI Act will regulate how law enforcement and migration authorities use dangerous AI systems. As Access Now, PICUM, EDRi, and others have highlighted, the AI Act could have been a tool to protect everyone, regardless of migration status, but its power to meaningfully rein in state authorities was undermined by member states. The following issues thus demand close attention once the AI Act deal text is public:

  • Exemption for national security. This dangerous loophole could allow member states to exempt themselves from the rules for any activity they deem relevant to “national security.”
  • Transparency obligations for law enforcement and migration authorities. The final text will tell us whether state authorities will be subject to public scrutiny or will instead be able to use risky AI systems with impunity.
  • List of high-risk systems. We don’t yet know if the list covers AI systems used for border patrolling, forecasting tools, or other forms of biometric identification.

The high-risk classification, which Access Now and others have long criticised, was also made much worse by the introduction of a structural loophole. The European Commission’s initial proposal was logical: every use case in Annex III’s list of high-risk applications would have to follow specific obligations. But as industry and state actors grew increasingly worried about seeing that list expand, they successfully lobbied for a “filter” to be added to the classification system, despite campaigning by Access Now and others, a letter from the UN High Commissioner for Human Rights, and a damning negative opinion on the filter from the Parliament’s own Legal Service. As a result, broad criteria now offer developers avenues to exempt themselves from obligations to protect people’s rights, confirming that the AI Act places industry interests over people’s fundamental rights.

What was never in the deal to begin with

Even as we consider what was won and lost last week, we shouldn’t overlook the glaring gaps that have always existed in the AI Act. 

Digital guardrails alone are not sufficient to tackle racism and discrimination in our societies. But the initial hope for the AI Act was that it would safeguard people against the worst uses and abuses of technology, so that AI does not perpetuate, exacerbate, or lead to human rights violations at both the individual and collective levels.

One example of how digital policy can do little to address the institutionalised racism inherent in EU policy-making is the reluctance, even on the part of the European Parliament, to prohibit the most dangerous AI systems used in migration and border contexts; this means protections for migrant people were absent from the very outset. Similarly, the AI Act’s “risk-based” approach has always been problematic, in that it narrows human rights protections to a limited number of use cases.

Conclusion

In the coming weeks and months, the EU will gloat about being the world’s regulatory trendsetter, and plenty of ink will be spilled about the so-called “Brussels effect” and the expectation that the AI Act will inspire similar legislation around the world. But unlike the GDPR, the AI Act is no gold standard. From its earliest days, it was a concession to industry and law enforcement, and now, despite the best efforts of civil society and a group of dedicated MEPs and their staff, it promises to protect police and private companies far more than people.