The EU AI Act: How to (truly) protect people on the move

The European Union Artificial Intelligence Act (EU AI Act) aims to promote the uptake of trustworthy AI and, at the same time, protect the rights of all people affected by AI systems. While EU policymakers are busy amending the text, one important question springs to mind: whose rights are we talking about?  

In its current form, the EU AI Act fails to address the impact that AI systems have on non-EU citizens and people on the move, such as people fleeing war. In fact, the proposal overlooks the manifold uses to which the EU puts AI systems in the migration context. From purported “lie detectors” to biometric identification systems, EU migration policies increasingly rest on the very kinds of AI the EU AI Act seeks to regulate.

In this blog post, we present three steps policymakers should take to make the AI Act an instrument of protection for people on the move. These steps are based on the amendments on AI and migration that Access Now developed jointly with EDRi, PICUM, Petra Molnar, and Statewatch.

Step 1 – Be brave: use the ban word!

Some AI systems pose an unacceptable risk to our fundamental rights, one that cannot be fixed through technical means or legal and procedural safeguards. For this reason, Article 5 of the AI Act sets out prohibitions on certain uses of AI; it is meant to guarantee that our core values are not compromised.

However, this article ignores AI systems used in migration that pose an unacceptable risk. That is why the first action lawmakers must take is to ban those AI systems that would irremediably harm the fundamental rights of people on the move. They are: 

  1. Risk assessment and profiling systems: In the context of migration and border checks, the EU uses AI systems to filter “legitimate” from “illegitimate” travellers. One example is the “visa streaming algorithm” the UK Home Office used to screen visa applications, which it suspended in 2020 following a legal challenge arguing that it “entrenched racism and bias into the visa system”. Such automated profiling is inherently problematic: civil society organisations and researchers have warned that these systems are irremediably contaminated with bias and reinforce existing forms of oppression, racism, and exclusion, harms that cannot be mitigated by technical means (see the sketch after this list).
  2. “Lie detectors”: This pseudo-scientific technology claims to infer someone’s emotional state, intentions, or state of mind from their biometric data, or from other data relating to their physical, physiological, or behavioural characteristics. In the migration context, there is a growing appetite for using these systems to assess people’s credibility during migration procedures. One example is iBorderCtrl, an EU Horizon 2020-funded project that tested an avatar which analysed people’s non-verbal micro-gestures and verbal communication to determine a traveller’s intention to deceive. Because of the intrinsic bias of these technologies, there is a great risk that such systems will misinterpret the cultural cues of people who differ from those they were trained on. Moreover, these systems represent an egregious form of surveillance that undermines a wide range of fundamental rights.
  3. “Predictive analytics”: These systems may generate assumptions that particular groups of people present a risk of “irregular migration”, and may encourage or facilitate preventative or other responses to forbid or halt their movement. Human rights defenders have widely reported how the intensification of border management operations to combat irregular migration exacerbates violence and the degrading treatment of people on the move, eventually leading to pushbacks.
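
To make the concern in point 1 concrete, here is a minimal, purely illustrative sketch. All data, nationalities, and thresholds are hypothetical, and this is not a description of any deployed system. It shows the feedback loop researchers warn about: a “risk” model fitted to historical visa decisions simply learns past refusal rates, so applicants from historically over-refused groups are flagged as high risk regardless of their individual circumstances.

```python
from collections import defaultdict

# Hypothetical historical visa decisions: (nationality, was_refused).
# Nationality "A" was refused far more often than "B" in the past.
history = [
    ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True),
]

# "Training": the learned risk score is simply the historical refusal rate.
refusals, totals = defaultdict(int), defaultdict(int)
for nationality, refused in history:
    totals[nationality] += 1
    refusals[nationality] += refused  # True counts as 1

risk_score = {n: refusals[n] / totals[n] for n in totals}

# "Deployment": new applicants inherit the historical bias wholesale.
for applicant in ("A", "B"):
    flag = "HIGH RISK" if risk_score[applicant] > 0.5 else "low risk"
    print(f"applicant from {applicant}: score={risk_score[applicant]:.2f} -> {flag}")
```

Every applicant flagged and refused on this basis would feed back into the historical record, pushing the group’s score still higher. Adjusting the threshold or dropping the nationality field does not resolve this, since correlated features encode the same signal: the discrimination sits in the training data itself.
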
Step 2 – Expand the list of “high-risk” AI systems used in migration   

Beyond prohibiting certain uses, the EU AI Act can ensure that “high-risk” AI systems are subject to careful scrutiny and accountability measures. Currently, the proposal fails to capture all the use cases that affect people’s rights, including:

  • Biometric identification systems: Biometric identity checks are a recurring feature of EU migration policy, in particular as part of a broader strategy to combat identity fraud and increase the number of deportations. These systems include mobile biometric identification devices that let migration authorities scan fingerprints or faces in the street and automatically compare the biometric data against a database or watchlist. Civil society has denounced how these systems can facilitate and increase the unlawful practice of racial profiling, warning that ethnicity or skin colour could serve as a proxy for an individual’s migration status. Given the severe risks of discrimination that come with these systems, lawmakers must ensure the EU AI Act classifies their use as high-risk.
  • AI systems for border monitoring and surveillance: As long as regular pathways into EU territory remain scarce, people will cross European borders by irregular means. In this context, authorities are increasingly using AI systems for generalised and indiscriminate surveillance at the borders. Civil society organisations and investigative journalists have documented the harms such technologies can cause, both in exacerbating violence and in facilitating illegal pushbacks. Given the elevated risk of fundamental rights violations and broader structural injustices, lawmakers should bring all AI systems that form part of a border control system within the scope of this Regulation.
Step 3 – Are you serious about this? Protect, don’t surveil, people on the move. Abolish Article 83

The fundamental rights of non-EU citizens and people on the move ultimately hinge on a single provision: Article 83. But it is not made to protect people. Quite the contrary.

This provision effectively exempts the large EU IT databases used in migration, which together form the EU interoperability framework that both civil society and scholars have denounced as a form of surveillance of non-EU citizens. These databases use, or are set to use, AI systems that would otherwise fall under the scrutiny of the Regulation, such as the automated risk assessments in the European Travel Information and Authorisation System (ETIAS) and the Visa Information System (VIS).

Several influential migration and tech scholars have recently expressed concern about this exemption. If it is maintained, it will put the fundamental rights of non-EU citizens at risk. Not only would the obligations and safeguards outlined in the AI Act not apply, leaving the systems in question unregulated; the provision would also reinforce the notion that the EU takes a differential approach to fundamental rights when it comes to people on the move.

It is therefore of utmost importance that lawmakers amend or strike Article 83. Fundamental rights apply to everyone, everywhere, regardless of residence status. Full stop.

Next steps for the EU AI Act

The EU AI Act is a crucial opportunity to challenge the system of surveillance the EU has been imposing on non-EU citizens and people on the move. 

Those working to amend the EU AI Act now have the chance to end the normalisation of a system based on the oppression of marginalised communities and advocate for a use of AI that centres respect for fundamental rights.

Will policymakers seize this opportunity?