
Facial recognition on trial: emotion and gender “detection” under scrutiny in a court case in Brazil

We are seeing AI-powered facial recognition systems deployed in increasingly sensitive environments around the world, often without people’s knowledge or consent. It is critical to ensure these systems do not facilitate human rights violations, particularly against people and communities who are already at risk. When such systems are deployed on the basis of pseudoscientific claims that enable discriminatory inferences, they harm human rights and erode public trust.

The Brazilian Institute of Consumer Protection (IDEC) has filed a public civil action against ViaQuatro, the São Paulo metro operator, over the installation and use of an AI crowd analytics system from AdMobilize that purportedly predicts the emotion, age, and gender of metro passengers without processing personal data. Access Now filed an expert opinion criticizing those claims.

The IDEC vs. ViaQuatro case is important for the future of human rights in Brazil, and a positive outcome would help set a legal precedent to prevent facial recognition surveillance based on spurious claims that would enable human rights violations. Below, we explore the details of our expert opinion, identify some of the harms of systems like the one in Brazil for trans and non-binary people, and explain why this case has global import.

Using facial recognition to serve ads in the São Paulo Metro

In April 2018, ViaQuatro, the concessionaire that operates the yellow line of the São Paulo public metro system, announced the installation of the Digital Interactive Doors System (the DID system), developed by AdMobilize, on that line. The DID system consisted of interactive doors on the metro platform that displayed ads. The doors were equipped with cameras that, according to ViaQuatro, allowed the DID system to recognize human faces and detect the emotion, gender, and age of passersby who looked at the advertising panels, in order to tailor the ads displayed to the audience. ViaQuatro justified the implementation by claiming that the system would serve as a platform for sharing information.

In August 2018, the Brazilian Institute of Consumer Protection (IDEC) filed a public civil action against ViaQuatro, arguing that the system violated consumer and personal data protection legislation. Two weeks later, the judge granted a precautionary measure ordering ViaQuatro to stop collecting data and to remove the cameras. ViaQuatro complied with the order while the case continued.

Access Now submitted an expert opinion to this case in which we counter a number of misleading claims made by ViaQuatro and their experts about the DID system. We also point out that the “crowd analytics” performed by the DID system are based on flawed scientific theories, in the case of emotion detection, and violate the rights of trans and non-binary people, in the case of gender detection. Here we’ll take a look at some of the main points from our expert opinion.

Can you do facial detection without processing personal data?

One misleading claim that ViaQuatro and their experts made about the DID system is that it performs facial detection, rather than facial recognition, and that it does this without processing unique biometric information. To see why this makes no sense, let’s start by getting some concepts clear.

Facial recognition is normally used as an umbrella term that encompasses a range of technological processes, including facial detection, verification (1-to-1 matching), identification (1-to-many matching), and classification/analysis (making inferences about what type of face the system sees).

Despite what ViaQuatro claimed, their system does not “just use facial detection,” but in fact uses two types of facial recognition: first, facial detection to figure out whether there are human faces in the images captured by the camera, and second, facial classification/analysis to make inferences about the age, gender, and emotion of those faces. 
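To make that two-stage structure concrete, here is a minimal, hypothetical sketch of how such a pipeline is typically wired together. It is not AdMobilize’s code (which is not public): the detection stage uses OpenCV, and classify_attributes is a stand-in for whatever model the classification/analysis stage would run.

```python
import cv2  # OpenCV: a common choice for the facial detection stage

# Stage 1: facial detection -- locate any human faces in the camera frame.
# The Haar cascade file ships with OpenCV; any face detector would do here.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_attributes(face_image):
    """Stage 2: facial classification/analysis (hypothetical stand-in).

    A real system would run a trained model here to infer age, gender,
    and "emotion" from the cropped face. The details do not matter for
    the legal argument: the input is already an image of someone's face.
    """
    return {"age": None, "gender": None, "emotion": None}

def process_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        face_crop = frame[y:y + h, x:x + w]  # biometric data: a face image
        results.append(classify_attributes(face_crop))
    return results
```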

Is it possible to do these two types of facial recognition without using personal data and using only anonymous data? No. 

Both of these facial recognition processes require the collection and processing of biometric data, which the Article 29 Working Party defines as: “biological properties, behavioural aspects, physiological characteristics, living traits or repeatable actions where those features and/or actions are both unique to that individual and measurable.”

Biometric systems typically have three stages of processing: enrollment, storage, and matching. In the enrollment stage, biometric data are collected. In the case of the DID system, the enrollment stage consists of capturing images of passersby and processing each image to detect whether it contains a face.

To claim that such a system only deals with anonymous data is entirely misleading. At the initial stage of processing, the DID system collects and processes raw images of metro users’ faces, i.e. their unique biometric information. Although anonymization or aggregation may occur after the initial process of facial detection, the fact remains that the system has already collected and processed metro users’ biometric data in the enrollment stage. 
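The point can be illustrated with a short, hypothetical sketch, assuming a data flow in which only an anonymous, aggregate count is retained. Even then, the raw frame containing passengers’ faces has to be captured and processed first, which is exactly the enrollment-stage processing at issue:

```python
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_viewers(frame):
    """Return only an anonymous, aggregate count of faces in a camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray)  # raw facial images processed here
    return len(faces)  # aggregation/anonymization only happens after the fact
```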

To make matters worse, metro users were not given an opportunity to opt out, had not consented to the collection of these data, and could not meaningfully consent in the first place: public transport is essential to everyday life, and passengers had no viable alternative.

Emotions aren’t the same as facial configurations

We also looked at the claim that the DID system can “detect” the emotions of metro users. We point to a recent meta-analysis of the evidence for such claims by Lisa Feldman Barrett et al., which investigated whether emotions can be reliably inferred from facial analysis. Barrett et al. discuss systems, like the DID system, from “[t]echnology companies [who] are investing tremendous resources to figure out how to objectively ‘read’ emotions in people by detecting their presumed facial expressions.” The study’s conclusion, however, was that “the science of emotion is ill-equipped to support any of these initiatives.”

There is no scientific basis to claim that systems like the DID system can “perceive” or “detect” a person’s emotions from images of their face. On a basic level, there is no simple one-to-one correlation between facial configurations (such as smiles or frowns) and emotions: people often smile for reasons other than happiness, and often express happiness with facial configurations other than a smile.

Another problem with the approach underlying the DID system, which is based on the controversial “basic emotions” view, is that it doesn’t and can’t take into account context and culture. Proponents of this basic emotions view claim that these basic facial configurations/movements are prototypes for emotional expression with universal validity. By contrast, Barrett et al. demonstrate that these configurations are “best thought of as Western gestures, symbols, or stereotypes that fail to capture the rich variety with which people spontaneously move their faces to express emotions in everyday life.”

What all this means is that the inferences made about the emotions of metro users are not scientifically valid. Their biometric data are being collected, stored, and processed in order to make unscientific inferences about their private emotional lives.

Automatic gender detection harms trans and non-binary people

We also outline some serious concerns with the gender detection technology used by the DID system, which is a form of Automatic Gender Recognition (AGR). We look at two major problems with AGR: first, it conflates gender with biological sex and assumes that gender can be determined from the physiological characteristics of a person’s face; second, it assigns gender based on a binary conception of gender (either male or female). Both of these things harm trans and non-binary people. 

On the first point, we argue that determining gender based on physiological (and in this case physiognomic) characteristics results in the systematic misgendering of trans people, whose gender identity differs from their sex assigned at birth. As Os Keyes says in their article “The Misgendering Machines,” “the assumption that sex dictates gender—in other words, that it mandates social roles, combinations of behaviours and traits and aspects of presentation and identity—fails to capture the existence of transgender (trans) people, whose genders do not match their assigned sex.”

On the second point, the fact that the DID system assigns gender based only on a male-female binary denies the existence of non-binary individuals who do not fit within that binary. AGR systems, such as the one used by AdMobilize, either fail to classify trans and non-binary people at all (thus excluding them), or misgender them by assigning a gender that does not match the gender they themselves identify with.
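The structural problem is visible in the design itself. In a typical binary AGR classifier the output layer has exactly two classes, so every face is forced into one of them; the model has no way to return “neither” or “unknown,” let alone to defer to the person’s own gender identity. The sketch below is a generic illustration of that design, not a description of AdMobilize’s model:

```python
import numpy as np

LABELS = ["female", "male"]  # the model's entire universe of possible genders

def assign_gender(logits):
    """A binary AGR output head: whatever the input, one of two labels is forced."""
    probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax over two classes
    return LABELS[int(np.argmax(probs))]             # no "non-binary", no opt-out

# Even a maximally uncertain prediction is still assigned a binary label:
print(assign_gender(np.array([0.0, 0.0])))  # -> "female" (ties resolve to index 0)
```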

Rather than “detecting” gender, this technology forcibly assigns gender, and in doing so it undermines trans and non-binary people’s right to self-determination and violates their human dignity. These harms are inflicted solely for the purpose of serving advertisements, which cannot be considered a proportionate aim given the risk of harm.

The global impact of IDEC vs. ViaQuatro 

The IDEC vs. ViaQuatro case is in its final days, and the ruling will have an impact not just on metro users in São Paulo, but globally, too. To date, only one case, decided in the Netherlands in 2018, has held that this kind of facial analysis/categorization should be considered personal data processing and required the data processor to comply with the General Data Protection Regulation.

As AI systems are being deployed in increasingly sensitive environments, we need assurance that these systems do not violate the rights of users and non-users who are impacted by them. Systems which misrepresent their functionality and which make pseudoscientific and discriminatory inferences cannot be deployed without undermining public trust.

In IDEC vs. ViaQuatro, we need a decision that protects and advances human rights, helping to set a precedent we can rely on in future cases involving invasive surveillance.

Access Now’s expert opinion follows our call for a ban on biometric data processing that enables or amounts to mass surveillance in public spaces, and our commitment to holding the private sector accountable for violating human rights.

For more updates on this case, follow IDEC and/or subscribe to our Access Now Express newsletter.