Algorithmic accountability

European Parliament shows (again) its stance on Artificial Intelligence

On Thursday 13 February, the European Parliament passed a Resolution calling for safeguards at the European level to protect consumers in the context of Automated Decision-Making (ADM). This Resolution was put forward by MEP Petra De Sutter on behalf of the Committee on the Internal Market and Consumer Protection (IMCO).

The topic of ADM, also discussed under the terms artificial intelligence (AI) or algorithmic systems, has received considerable attention in Europe recently, particularly since Ursula von der Leyen’s commitment to address the human and ethical implications of AI through legislation within her first 100 days in office as President of the Commission. For now, this commitment will take the form of a white paper, to be released on 19 February. The potential form that an eventual “AI regulation” could take has captured the political imagination of the EU bubble and beyond, with options ranging from a horizontal law on AI, through sector- or application-specific approaches (including calls for a ban on specific applications such as facial recognition technology), to a review of existing legislation on liability regimes and product safety.

In line with the German Data Ethics Commission, the Resolution “stresses the need for a risk-based approach to regulation”. In the context of safety and liability, it “calls on the Commission to develop a risk assessment scheme for AI and automated decision-making in order to ensure a consistent approach to the enforcement of product safety legislation in the internal market”. The core of the Resolution, however, is an invitation for the Commission to review existing European law and ensure it is adequate in light of the emergence of ADM technology in various domains, of its evolution and of the risks it represents for consumers. In particular, the Parliament directs the Commission’s attention to the frameworks of consumer law, product safety and market surveillance. 

While Paragraph 9 of the Resolution stresses that the GDPR already addresses many aspects of the topic, it is worth noting that the Parliament adopted the proposed Resolution as a whole except for three words: “data protection legislation” in recital D. MEPs specifically voted not to include data protection legislation in the Parliament’s call to examine the existing European body of law on “whether it is able to respond to the emergence of AI and automated decision-making”. While there are areas where adequate personal data and privacy protections require further action (such as the conclusion of the ePrivacy reform and potential additional rights in the specific context of ADM), the enforcement of the GDPR must remain a priority, without opening any doors to weaken existing norms.

In addition to calling for a risk-based approach to regulation and a review of existing frameworks, the Resolution draws attention to the need for specific, additional safeguards protecting consumers, such as explainability, human responsibility and control, and access to remedy. The list of such safeguards is limited, however, compared to those put forward by existing initiatives such as the EU High-Level Expert Group on AI in its Ethics Guidelines for Trustworthy AI. The Greens/EFA proposed a number of amendments that would have addressed this gap by adding, among other things, mandatory risk assessments taking into consideration ecological, economic and social sustainability in the design and use of algorithmic systems, and mandatory disclosure of software documentation and datasets to market oversight authorities. Unfortunately, these amendments were rejected, and the safeguards covered in the Resolution therefore remain limited.

While the Resolution’s focus on consumer protection allows it to address a number of serious issues, it by no means covers the full range of threats posed by ADM, such as those raised by its use in policing, social welfare, or public administration in general. To take just the most recent example, the Dutch AI-based fraud detection programme SyRI is not a consumer-facing system, but last week’s court decision shows that this too is an area where AI/ADM threatens the rights and welfare of individuals.

Resolutions are, of course, non-binding instruments, and this is not the first time the Parliament has adopted a Resolution on AI (see here and here, for example). Nonetheless, the Parliament’s work on the topic signals both an awareness of the issues to be addressed and a commitment to address them. We hope that this Resolution can form part of a broad combination of EU regulatory initiatives that turn Trustworthy AI into more than a slogan.

To find out more about the potential impact of ADM on our human rights, take a look at our work on AI. 

This blog post was written by Siméon de Brouwer, AI policy intern