Facial recognition

Instead of banning facial recognition, some governments in Latin America want to make it official

Facial recognition technology is increasingly being deployed for mass surveillance in Latin America, and things are getting worse. Government leaders in Buenos Aires, Brasilia, and Uruguay are pushing legal frameworks to perpetuate this invasive and harmful use of mass surveillance tools. Civil society must fight back.

Acting without people’s knowledge — let alone their ability to consent — police and other local authorities in Buenos Aires, Brasilia, and Uruguay are increasingly using facial recognition technology for mass surveillance, identifying people’s faces and checking them against a database of suspects. Everyone surveilled in this way is effectively treated as a criminal, without having done anything wrong. Now, even as the global movement to ban facial recognition gains steam, these governments are working to make these privacy-invasive practices official. Unless that changes, we will see a deeper “chilling effect” that limits people’s exercise of freedom of movement, association, and protest.

How governments are retroactively authorizing mass facial recognition

Let’s go chronologically. On October 22, 2020, the local assembly in Buenos Aires approved an amendment to Law Nº 5688 – Integral public security system of Buenos Aires. The purpose of the amendment was to create provisions covering the use of facial recognition cameras for public security purposes. However, authorities in Buenos Aires had already been using these invasive systems since 2019 — despite reports showing that they lack efficacy and violate people’s due process rights. These systems have misidentified people, who were then detained for hours, or even days, at police stations. After human rights groups spoke out, the United Nations special rapporteur for privacy expressed deep concerns, warning against implementation of the technology in the city.

Likewise, on November 11, 2020, the local government of Brasilia enacted Law Nº 6.712, which allows the use of facial recognition technology for public security purposes. Once again, the government had already implemented this type of system, specifically for the carnival celebrations earlier this year. While local authorities bragged about a decrease in crime at the carnival, they failed to show how the facial recognition cameras contributed, including in comparison to the other measures they took.

Finally, in Uruguay, the Senate recently approved the Budget Bill 2020-2024. Like any other budget bill, it covers a broad range of government programs. But of particular relevance here is its creation of a facial identification database for public security purposes, and its authorization of the migration of all citizens’ data, including biometrics, from the Public Registry to the Interior Ministry (the police). Uruguay’s government had already bought an expensive facial recognition system earlier this year.

How mass facial recognition systems work and why they violate human rights laws and norms

There’s a growing effort around the world to limit or ban the use of facial recognition systems that enable or contribute to mass surveillance or otherwise violate human rights. Not only are there court cases challenging the legality of these systems; human rights activists have also succeeded in getting bans passed in some cities and states in the United States, campaigns for such bans are intensifying in Europe, and in Morocco, the data protection authority has issued a moratorium. There is a good reason for all of this: these systems run counter to internationally recognized human rights.

Facial recognition is an umbrella term that describes a number of different techniques. There is some variation in how we refer to them, but experts typically make a distinction between facial authentication, identification, and classification. Authentication involves 1:1 matching, such as when airport security checks whether the person standing in front of the gate matches the ID card they are holding. Identification involves 1:many matching, such as when police analyze video footage to match people against a database of criminal suspects. Classification entails analysis of faces to classify them according to some (often controversial) criteria, such as gender or ethnicity, or to make inferences about their emotions or personalities. In Buenos Aires, Brasilia, and Uruguay, authorities want to be able to use techniques like these on citizens — regardless of whether they are criminals — for public security purposes.
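To make the distinction concrete, here is a minimal sketch of how 1:1 authentication differs from 1:N identification. The embedding model, the watchlist, and the similarity threshold are all hypothetical placeholders; none of the systems discussed here publish their implementations, so this is only an illustration of the two matching modes.

```python
import numpy as np

# Minimal sketch of the two matching modes. The "embeddings" below are random
# vectors standing in for the output of a real face detection and embedding
# model, and the threshold is an arbitrary illustrative value.

SIMILARITY_THRESHOLD = 0.8  # assumed decision threshold, not from any deployment

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(probe: np.ndarray, claimed: np.ndarray) -> bool:
    """1:1 matching: does this face match the single identity being claimed?"""
    return cosine_similarity(probe, claimed) >= SIMILARITY_THRESHOLD

def identify(probe: np.ndarray, watchlist: dict) -> list:
    """1:N matching: compare this face against every entry in a database."""
    return [name for name, ref in watchlist.items()
            if cosine_similarity(probe, ref) >= SIMILARITY_THRESHOLD]

# Illustrative usage with random vectors in place of real embeddings.
rng = np.random.default_rng(seed=0)
probe = rng.normal(size=128)
watchlist = {f"suspect_{i}": rng.normal(size=128) for i in range(10_000)}
hits = identify(probe, watchlist)  # every "hit" is only a probabilistic guess
print(f"{len(hits)} possible matches out of {len(watchlist)} entries")
```

The key point is that identification repeats the comparison against every entry in the database, so every person whose face enters the camera's view is searched against the whole watchlist.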

What this means in practice is that the police and other authorities would use facial recognition to detect your face in an image — usually a live camera image — and then match it against an existing database of faces (typically, of criminal suspects). The result is a mass surveillance system that treats ordinary people as suspects, and violates their rights to privacy and data protection. As we note above, when people know they are being watched, and could be identified — or misidentified — by police, they no longer feel free to exercise a broad array of rights, including the freedom of movement, association, and assembly.
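Back-of-the-envelope arithmetic helps show why such a system inevitably sweeps in innocent people. The numbers below are purely illustrative assumptions (a per-comparison false match rate, a watchlist size, and daily camera traffic we have invented for the example, not figures from the Buenos Aires, Brasilia, or Uruguay deployments), yet even under these generous assumptions the system produces a steady stream of people wrongly flagged as suspects.

```python
# Back-of-the-envelope estimate of false alarms from a 1:N watchlist search.
# Every number here is an assumption for illustration, not a figure from the
# Buenos Aires, Brasilia, or Uruguay systems.

false_match_rate = 1e-5      # assumed chance a passerby falsely matches one watchlist entry
watchlist_size = 10_000      # assumed number of people in the suspect database
passersby_per_day = 100_000  # assumed number of faces the cameras capture daily

# Probability that a single innocent passerby triggers at least one false match
# when compared against every watchlist entry.
p_flagged = 1 - (1 - false_match_rate) ** watchlist_size
expected_false_alarms = p_flagged * passersby_per_day

print(f"Chance an innocent passerby is flagged: {p_flagged:.1%}")
print(f"Expected wrongful flags per day: {expected_false_alarms:,.0f}")
```

Under these assumed numbers, roughly one in ten passersby would be flagged at least once, which is consistent with the reports of people misidentified and detained mentioned above.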

These laws to authorize facial recognition raise other concerns as well. 

  • The laws are open to misinterpretation and abuse. All three seek to authorize facial recognition for the vague purpose of “public security.” None provides a detailed description of what public security means, leaving the concept vague and broad. This opens the door for police to make arbitrary decisions justified on the basis of public security.
  • The systems may implicate data protection rights. In Brasilia and Uruguay, public security is an exception to the application of the data protection law. Buenos Aires, by contrast, has both local and federal data protection laws. But in the context of ensuring public security, the police and other authorities can deny people their rights of access, rectification, update, and suppression.

Since the data protection laws in Brasilia and Uruguay carve out public security, and Buenos Aires can restrict data protection rights, the specific rules each law sets for processing biometric data for facial recognition become all the more important.

The Buenos Aires and Brasilia laws set a cap on the time for data storage — 60 days and 5 years respectively — with some exceptions. Regarding data sharing, the three have different rules, the Argentinian one being the most protective. 

The law in Buenos Aires calls for confidentiality (people accessing the database cannot disclose information) and an annual review of how the technology is applied. In Brasilia, meanwhile, the law demands the corroboration of a public official when a person is identified using facial recognition systems, and requires that authorities use a banner or warning next to a security camera to notify people about the existence of the facial recognition system, for information purposes.

  • The systems may entail analysis of people’s movements. In Buenos Aires, the law allows an additional feature that can analyze movements/behavior, and in Brasilia, the law allows the tracking of a person’s movements for up to 72 hours. Not only is such tracking more invasive than momentary identification, the idea that systems can “detect suspicious behavior” by tracking movements is highly contentious and could lead to significant abuses.

What can we learn from this? We need to speak out

There are deep flaws in all three laws. It is already a huge mistake to implement systems that threaten human rights. Creating a legislative framework to retroactively authorize them is even worse: it is putting lipstick on a pig.

There are some applications of technology that are so dangerous for human rights that no remedy or safeguard can prevent or mitigate the negative impact on individuals and society — and they must therefore be banned. Facial recognition for mass surveillance is among these applications.

The technology is far from perfect and has been repeatedly criticized for its lack of efficacy and its capacity to entrench discrimination against certain ethnic groups. Communities in Latin America are diverse, and any use of this technology here requires a deep human rights audit and a full, transparent discussion of its limitations.

We are deeply disappointed to see that the debate on implementing and legislating facial recognition systems lacks transparency and provides no space for human rights assessments. Jointly with our partner organizations, we have closely followed the debate in Buenos Aires (here and here). Sadly, legislators chose not to discuss the bill in the Human Rights Commission. We also worked with other organizations to send a letter to the Uruguay Senate, asking them to remove the problematic articles in Uruguay’s Budget Bill and start a separate discussion on facial recognition technology — but that didn’t happen.

The fact that authorities are pushing for uses of facial recognition that run contrary to human rights is a sign that more of us need to speak out and make our voices heard on this issue. We cannot ignore the authorization of a use of technology that so egregiously undermines our rights. We encourage you to follow civil society organizations working on this issue — Laboratorio de Datos y Sociedad (DATYSOC), Via Libre, Asociación por los Derechos Civiles (ADC), Brazilian Institute of Consumer Protection (IDEC), Laboratory of Public Policy and Internet (LAPIN), and others — and contact your representative to demand respect for your rights. If we work together, we may be able to reverse this dangerous trend.

