
A call to EU legislators: protect rights and reject the call to delete the transparency safeguard in the AI Act

Dear Members of the European Parliament,
Dear Representatives of Member States of the European Union,
Dear Executive Vice-President Virkkunen,
Dear Commissioner McGrath,

We, the undersigned organisations and individuals, urge you in the strongest possible terms to reject the deletion of the Article 49(2) transparency safeguard for high-risk AI systems that is proposed in the AI Omnibus. This transparency safeguard ensures that providers of AI systems cannot circumvent the core obligations of the AI Act. Removing it would have no substantial positive impact and would lead to complication rather than simplification; it would drastically undermine the enforceability of the AI Act, weaken the functioning of the Single Market, and create unacceptable risks for health, safety, and fundamental rights.

We therefore urge you to reject paragraphs 6, 14, and 32 of the AI Omnibus, thereby restoring the Article 49(2) transparency safeguard.

The Article 49(2) transparency safeguard has an essential function and removing it, as proposed in the Commission’s AI Omnibus, will create a gaping loophole and undermine the core functioning of the AI Act

Under Article 6(3), providers of AI systems that match the list of high-risk use cases in Annex III may decide that their system does not in fact pose a significant risk and unilaterally exempt themselves from all obligations for high-risk AI systems.

To stop the abuse of this derogation mechanism, providers who do exempt themselves are required by the Article 49(2) transparency safeguard to register their derogation in a publicly viewable database. Removing this transparency safeguard would have three key negative consequences:

  1. Market surveillance authorities will have no overview of how many companies exempt themselves from the high-risk requirements, and there will be no way of tracking discrepancies across Member States (e.g. 3,000 exemptions in Country A but only 6 in Country B), leading to a potential lack of harmonisation across the Single Market.
  2. Providers are given a completely opaque and unaccountable way to opt out of the obligations for high-risk AI systems, creating a perverse incentive to sidestep the requirements of the AI Act. Importantly, this perverse incentive will work to the detriment of responsible providers who genuinely wish to develop trustworthy systems in the high-risk categories, allowing them to be undercut in the market.
  3. The public, including civil society organisations, will have no way of knowing which providers have exempted themselves from obligations, despite the fact that their systems fall under the high-risk categories in Annex III. This removes a key element of transparency, undermines public trust, and deprives those affected by AI systems of necessary information to challenge an exemption.

Given the serious negative consequences of removing the Article 49(2) transparency safeguard, one would expect the Commission to have a strong argument for the positive impact of its removal. Instead, according to the Commission’s Staff Working Document accompanying the AI Omnibus, removing this registration obligation would save, on average, 100 EUR: “The obligation to register AI systems in the EU high-risk database involves inputting into the online database some information which is readily available to the provider of an AI system. No more than 2.5 working hours should be required on average […][h]ence, the costs would be EUR 100 per company.”

Saving 100 EUR per company is severely disproportionate to the detrimental impact of removing the Article 49(2) transparency safeguard, and it contradicts the Commission’s claim that the ‘targeted simplification measures’ in the AI Omnibus do “not go beyond what is necessary to achieve the objectives of simplification and burden reduction without lowering the protection of health, safety and fundamental rights.” An extra 100 EUR in the pockets of companies will not improve the EU’s competitiveness; it will only undermine the core of the AI Act and risk turning it into a piece of optional self-regulation.

We therefore urge you in the strongest possible terms to reject the changes proposed in paragraphs 6, 14, and 32 of the AI Omnibus and thereby restore the Article 49(2) transparency safeguard.

Signatories

  • Access Now
  • European Digital Rights (EDRi)
  • 5Rights Foundation
  • AI Forensics
  • AK Europa
  • AlgorithmWatch
  • Alternatif Bilisim
  • Amnesty Tech
  • ARTICLE 19
  • Asociația pentru Tehnologie și Internet
  • Association pour la taxation des transactions financières et pour l’action citoyenne (Attac)
  • BEUC – European Consumer Organisation
  • Bits of Freedom
  • Centre for Democracy and Technology Europe
  • Coalition for Independent Technology Research
  • Corporate Europe Observatory (CEO)
  • Danes je nov dan, Inštitut za druga vprašanja
  • Defend Democracy
  • Democratic Society
  • Deutsche Vereinigung für Datenschutz e.V. (DVD)
  • Die Bürokratiemonster
  • Digitalcourage e. V.
  • Digitale Gesellschaft (Germany)
  • Digitale Gesellschaft (Schweiz)
  • Electronic Frontier Norway
  • epicenter.works
  • European Center for Not-for-Profit Law (ECNL)
  • European Civic Forum
  • European Council of Autistic People
  • European Disability Forum (EDF)
  • European Environmental Bureau
  • European Network Against Arms Trade (ENAAT)
  • European Public Service Union (EPSU)
  • Fix the Status Quo
  • Gong
  • Homo Digitalis
  • Human Development Research Initiative (HDRI)
  • Investor Alliance for Human Rights
  • Irish Council for Civil Liberties
  • IT-Pol Denmark
  • La Quadrature du Net
  • Lafede – justícia global
  • Likestillings- og diskrimineringsombudet / Equality and Anti-Discrimination Ombud (Norway)
  • Metamorphosis Foundation for Internet and Society
  • Panoptykon Foundation
  • People vs Big Tech
  • Politiscope
  • Stichting Health Action International (HAI)
  • The Good Lobby
  • Weaving Liberation
  • Wikimedia Deutschland e.V.
  • younion _ Die Daseinsgewerkschaft
  • Dr. Gianclaudio Malgieri, eLaw Leiden University
  • Dr Abeba Birhane, AI Accountability Lab (AIAL), Trinity College Dublin
  • Dr. Harshvardhan Pandit, AI Accountability Lab (AIAL), Trinity College Dublin
  • Maribeth Rauh, AI Accountability Lab (AIAL), Trinity College Dublin
  • Dr. Zeerak Talat, University of Edinburgh
  • LK Seiling, Weizenbaum Institut / DSA40 Data Access Collaboratory
  • Dr. Laura Caroli
  • Dr. Aida Ponce Del Castillo