
Here’s how to fix the EU’s Artificial Intelligence Act

The European Union is getting back to work after the summer break, and one of the key files on everyone’s mind is the EU Artificial Intelligence Act (AIA). Over the summer, the European Commission held a consultation on the AIA that received 304 responses, with everyone from the usual Big Tech players down to the Council of European Dentists having their say.

Access Now submitted a response to the consultation in August that outlined a number of key issues that need to be addressed in the next stages of the legislative process. Here’s a quick refresher on some of our main recommendations.

Fix those dodgy definitions: emotion recognition and biometric categorisation

If you want to regulate something, you need to define it properly; otherwise, you create problematic loopholes. Unfortunately, the definitions of emotion recognition (Article 3(34)) and biometric categorisation (Article 3(35)) in the current draft of the EU Artificial Intelligence Act are technically flawed.

Both definitions are limited to applications of AI that use biometric data, which is defined in Article 3(33) in line with its definition in the General Data Protection Regulation as data relating “to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person.” The key question here is: does all physiological data allow unique identification?

Emotion recognition and biometric categorisation can be done with physiological data that arguably doesn’t meet the high bar for identification required to be classed as biometric data (e.g. galvanic skin response). In such cases, providers could argue that their system is not subject to obligations under the AIA.

We also argue that the definition of biometric categorisation should not be limited to biometric data for the same reason, but there are further issues. You may remember the dodgy definition (a.k.a. phrenology) in the EU Artificial Intelligence Act:

– (35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data

This definition was roundly criticised by commentators both for being technically nonsensical and for normalising problematic uses of AI. This is because it includes categories that can’t and shouldn’t be inferred from physical, physiological or behavioural data, including sex, ethnic origin, and sexual or political orientation.

We’ve already campaigned for a ban on automated recognition of sex, gender, or sexual orientation, and the same reasoning applies to using AI to detect ethnic origin, political orientation, or other complex human attributes. For example, the claim that ‘political orientation’, itself a contested concept, can be reliably inferred from physiological data rests on an idea of biological determinism that is in irremediable conflict with the essence of human rights. You can change your political views, but not your face.

We therefore propose a new definition that removes these problematic assumptions (see p.9 of our submission). However, fixing definitions is just step one; step two is prohibiting these dangerous applications of AI. After all, we have to properly define what we want to prohibit, don’t we?

Ensure Article 5’s prohibitions are actually effective

Since the early days of discussions about regulating AI in the EU, Access Now and other civil society organisations have called for red lines on applications of AI that are incompatible with fundamental rights. The EU Artificial Intelligence Act does contain provisions for red lines in Article 5 on ‘Prohibited Artificial Intelligence Practices’; however, there are three major problems with Article 5 in its current form:

  1. The prohibited practices are defined too vaguely
  2. Many practices currently labelled “high risk” should instead be prohibited
  3. There are no criteria for determining when a practice should be prohibited

We therefore propose amendments to the existing prohibitions in Article 5, and recommend adding the following prohibitions:

  1. Uses of AI to categorise people on the basis of physiological, behavioural, or biometric data, where such categories are not fully determined by that data
  2. Uses of AI for emotion recognition
  3. Dangerous uses of AI in the context of policing, migration, asylum, and border management

We further propose a list of criteria to determine when a prohibition is required, and suggest adding a provision to Article 5 that allows the list of prohibited practices to be updated, so that the AIA remains future-proof and can adapt to unforeseen, dangerous developments.

Mandate impact assessments and real transparency

We can only have an “AI ecosystem of trust and excellence” in the EU if we have real, meaningful transparency and accountability in how AI is developed, marketed, and deployed. While the AIA introduces some interesting measures to achieve such transparency and accountability, we argue that it needs to go further.

The AIA currently envisages few or no obligations for actors who procure and deploy AI systems (called “users” in the AIA terminology), with the entire conformity assessment procedure falling on those placing AI systems on the market (called “providers”). While we welcome the obligations placed on providers, and would even suggest that they could be strengthened, we need more obligations on users.

The deployment of an AI system, in particular circumstances and with particular aims, will have a significant impact on the risk it poses to fundamental rights. The idea that a provider of an AI system can foresee all possible risks in abstraction from the concrete context of use is thus flawed.

We therefore recommend:

  1. All users of high-risk AI systems should be obliged to perform a data protection impact assessment (DPIA) or, where a DPIA is not applicable, a human rights impact assessment (HRIA). This would ensure that, in all cases, a user procuring a high-risk AI system has performed some form of impact assessment addressing how their use of the system, in their particular circumstances, will impact fundamental rights.
  2. Article 60’s EU database for stand-alone high-risk AI systems (which currently only contains information about which systems are on the market) must also be extended to include information about where those high-risk systems are being deployed and used. As noted above, the context of use of an AI system has a huge impact on how it affects fundamental rights. From the perspective of civil society, and of the people affected by and interacting with AI systems, what matters most is knowing where a high-risk AI system is actually deployed or in use.
  3. All impact assessments carried out by public authorities in the context of procuring a high-risk AI system should be publicly viewable in their full, unredacted form, and the database should also contain all impact assessments, whether DPIAs or HRIAs, carried out by private-sector users of high-risk AI systems. Where necessary, the published versions may be redacted, but the full, unredacted version should also be stored in the database, accessible to enforcement bodies and available to the public on request.

Let’s keep working to protect human rights in AI

While the current proposal provides a workable framework for regulating harmful applications of AI, it requires serious modifications in a number of areas. For more information on the recommendations outlined here, and for further recommendations on other aspects of the AIA, check out our full submission to the European Commission’s consultation.

Access Now will be working with the co-legislators in the coming months to ensure that these issues are addressed.