
Computers are binary, people are not: how AI systems undermine LGBTQ identity

Most of us interact with some sort of Artificial Intelligence (AI) system several times a day, whether it’s using the predictive text function on our phones or applying a selfie filter on Instagram or Snapchat.

Some AI-powered systems do useful things that help us, like optimizing electricity grids. Others capture our most sensitive personal information — your voice, your face shape, your skin color, the way you walk — and use it to make inferences about who you are.

Companies and governments are already using AI systems to make decisions that lead to discrimination. When police or government officials rely on them to determine who they should watch, interrogate, or arrest — or even “predict” who will violate the law in the future — there are serious and sometimes fatal consequences.

Civil society organizations, activists, and researchers around the world are pushing back. In Europe, Access Now is part of the Reclaim Your Face campaign, which has launched a formal petition to ban biometric mass surveillance in the European Union, and we have joined 61 leading NGOs in asking lawmakers for prohibitions or red lines on applications of AI that are incompatible with human rights.

Some threats posed by AI aren’t as obvious as others. It doesn’t help when companies like Spotify quietly develop AI systems based on highly sensitive biometric data without showing how they plan to protect human rights.

This week we’re launching a new campaign with All Out, a global LGBT+ organization, and with the support of Reclaim Your Face and the researcher Os Keyes, to expose the threat of automated gender “recognition” and of AI systems that claim to predict sexual orientation. Our message: these systems are dangerous for LGBTQ people around the world, and they should be banned. Here’s why.

How AI can automate LGBTQ oppression

Let’s start with automated gender recognition, or AGR. User interfaces that require people to input information about their gender are everywhere. Thankfully, in some cases, we’ve seen attempts to reshape user interface design to give people more agency in defining their own gender identity beyond a simplistic male/female or man/woman binary, whether by offering a broader selection of labels or, better yet, by letting people freely enter a label that best captures their gender identity.
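
As a rough illustration of that design choice, here is a minimal sketch of a signup model that treats gender as an optional, self-described field rather than a forced binary. The field names and suggested labels are hypothetical, not drawn from any particular product.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical example: names and labels are illustrative, not from any real product.
SUGGESTED_LABELS = ["woman", "man", "non-binary", "genderfluid", "agender", "prefer not to say"]

@dataclass
class SignupForm:
    username: str
    # Optional free-text field: the person chooses the label, the system never infers it.
    self_described_gender: Optional[str] = None

def store_gender(raw: Optional[str]) -> Optional[str]:
    """Keep whatever the person wrote (trimmed); never coerce it into a binary."""
    if raw is None:
        return None
    cleaned = raw.strip()
    return cleaned or None

form = SignupForm(username="sam", self_described_gender="non-binary")
print(store_gender(form.self_described_gender))  # -> non-binary
```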

AGR does the opposite. It removes your opportunity to self-identify, and instead infers your gender from data collected about you. This technology uses information such as your legal name, whether or not you wear makeup, or the shape of your jawline or cheekbones, to reduce your gender identity to a simplistic binary. Not only does this fail to reflect any objective or scientific understanding of gender, it represents a form of erasure for people who are trans or non-binary. This erasure, systematically and technically reinforced, has real-world consequences. When you and your community are not represented, you lose the ability to advocate effectively for your fundamental rights and freedoms. That can affect everything from housing to employment to healthcare.

AGR systems are already sparking controversy for enabling the exclusion of trans and gender non-conforming people. Take Giggle, the “girls only” social networking app. In order to enforce its girls-only policy, the company demands that users upload a selfie to register. Giggle then uses third-party facial recognition technology that it claims “determines the likelihood of the subject being female at a stated confidence level.”
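
Mechanically, a gate like this is just a confidence threshold applied to the output of someone else’s binary classifier. The sketch below is a hypothetical reconstruction, not Giggle’s actual code; it only shows how anyone the model misreads is locked out, with no way to self-identify or appeal.

```python
# Hypothetical reconstruction of a registration gate built on a third-party
# binary gender classifier; not actual code from any product.
def allow_registration(prediction: dict, threshold: float = 0.9) -> bool:
    """'prediction' is assumed to look like {"label": "female", "confidence": 0.87}."""
    return prediction["label"] == "female" and prediction["confidence"] >= threshold

# A trans woman the classifier misreads as "male" is rejected outright,
# as is anyone the model is simply unsure about.
print(allow_registration({"label": "male", "confidence": 0.95}))    # False -> excluded
print(allow_registration({"label": "female", "confidence": 0.80}))  # False -> excluded
print(allow_registration({"label": "female", "confidence": 0.97}))  # True
```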

Leaving aside other issues with the app, numerous studies and audits have shown that AGR technology based on facial recognition is not accurate for many people. In their groundbreaking study, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Joy Buolamwini and Timnit Gebru found that the AGR systems deployed by prominent companies had higher error rates for women than for men, and that the failure rate was even higher for dark-skinned women. Unfortunately, making facial recognition more accurate would not diminish its discriminatory impact in many other contexts — and as we’ll show, it’s highly unlikely any adjustments would make it any less harmful for trans and non-binary people.

How misgendering machines harm people

Research shows that AGR technology based on facial recognition is almost guaranteed to misgender trans people and inherently discriminates against non-binary people. As Os Keyes explains in their paper, The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition, approaches to AGR are typically based on a male-female gender binary and derive gender from physical traits; this means that trans people are often misgendered, while non-binary people are forced into a binary that undermines their gender identities.

How exactly does this impact their rights and freedoms? Our own Daniel Leufer joined Keyes and other experts to explore that question at a panel discussion at this year’s Computers, Privacy, and Data Protection (CPDP) conference, Automated Gender Attribution: It’s a Boy, It’s a Girl! Said the Algorithm.

Watch the panel discussion here


Keyes’ analysis was further strengthened by Morgan Klaus Scheuerman et al. in their paper, How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis and Image Labeling Services. Scheuerman et al. analyzed 10 commercially available AGR services (from Amazon, IBM, Microsoft, and others) and found that they “performed consistently worse on transgender individuals and were universally unable to classify non-binary genders.”

Misgendering hurts people. As Scheuerman et al. point out, many trans people already struggle with gender dysphoria, and AGR systems embedded in everyday life are likely to exacerbate “the emotional distress associated with an individual’s experience with their gendered body or social experiences.”

Consider an advertising billboard that “detects” your gender and switches from advertising power tools for “men” to summer dresses for “women.” Not only does this reinforce outdated gender-based stereotypes, as Keyes notes, “a trans man who sees a billboard flicker to advertise dresses to him as he approaches is, even if he likes dresses, unlikely to feel particularly good about it.” It would be even worse if AGR systems gain traction in the public sector and are used to control access to public toilets or other gendered spaces, serving to exclude trans people and others who are misclassified by these systems — including in settings like government buildings, hospitals, or vaccination centers where they are seeking access to essential services.

Systems for “gendering” people in public spaces are not hypothetical; they are already being deployed around the world. In São Paulo, Brazil, the Brazilian Institute of Consumer Protection (IDEC) filed a public civil action against ViaQuatro, a metro operator, to challenge the installation and use of smart billboards that claim to predict the emotion, age, and gender of metro passengers to serve them “better ads.” Access Now filed an expert opinion in this case to criticize those claims and highlight the dangers of AGR.

New research from Coding Rights and Privacy International has also mapped the use of AGR to verify identities for access to public services in Brazil, including a survey in which they found that “90.5% of trans people responded that they believe facial recognition can operate from a transphobic perspective; 95.2% had the impression that this technology can leave them vulnerable to situations of embarrassment and contribute to the stigmatization of trans people.”

Our private spaces are also being opened to systems that purport to detect gender. Spotify was recently granted a speech-recognition patent for a system that claims to detect, among other things, your “emotional state, gender, age, or accent” to recommend music. On April 2, we sent a letter to Spotify calling on the company to abandon the technology.

The Spotify patent is part of a larger trend of “invisible” uses of AGR whose harms are not necessarily obvious. On a deeper level, that of technical infrastructure itself, AI systems that categorize us by gender impose a narrow, technical conception of identity on top of our social identity. Scheuerman et al. describe how this forces us into a new “algorithmic identity” — one that “blurs the social and the technical perspectives of identity, calcifying social identities into fixed, technical infrastructures.”

One example of AGR that goes unnoticed in daily life is its use in cloud-based computer vision services. Companies that offer these services integrate AGR as a basic feature, and those that don’t, like Google, still provide some form of image-labeling service that applies gendered labels to pictures. This creates an incentive for developers to incorporate flawed AGR into apps built on top of these services, which in turn cements the use of a reductive, discriminatory technology that digitally erases trans and non-binary people, and normalizes that erasure.
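
To make concrete how little friction this involves for a developer, here is a minimal sketch using Amazon Rekognition’s DetectFaces call via the boto3 library, one of the services Scheuerman et al. audited. It assumes configured AWS credentials and a local image file, and it is shown to illustrate the mechanism, not to recommend it.

```python
import boto3  # assumes AWS credentials are already configured

# Amazon Rekognition was among the commercial services audited by Scheuerman et al.
client = boto3.client("rekognition", region_name="us-east-1")

with open("selfie.jpg", "rb") as image_file:
    response = client.detect_faces(
        Image={"Bytes": image_file.read()},
        Attributes=["ALL"],  # includes a binary "Gender" attribute
    )

for face in response["FaceDetails"]:
    gender = face["Gender"]  # e.g. {"Value": "Female", "Confidence": 99.1}
    # One dictionary lookup, and the app has assigned this person a binary
    # gender label they never provided themselves.
    print(gender["Value"], gender["Confidence"])
```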

Data-harvesting tech companies already impose over-personalized content and micro-targeted ads on us through privacy-invasive techniques. This is often portrayed as a positive development, even though we never asked for it and cannot control it. AGR is yet another step down that path, allowing companies to double down on their harmful business models. After removing our ability to control what we see online, they are now removing our agency to control and affirm our identities.

The threat of “AI Gaydar”

In addition to AGR, our campaign also opposes the use of AI systems to “detect,” or more accurately infer, people’s sexual orientation. This use case deservedly got a lot of negative publicity when Stanford professor Michal Kosinski, a controversial figure whose earlier research was linked to the Cambridge Analytica scandal, published a paper outlining an “AI Gaydar”: a machine learning system that he and his co-author claimed could accurately predict your sexual orientation from a picture of your face, and “correctly distinguish between gay and heterosexual men in 81 percent of cases, and in 71 percent of cases for women.”

These claims were clearly overblown and misleading, and there were serious flaws in both the data used and the setup of the experiment. The paper received harsh criticism, both for its misleading accuracy claims and, more seriously, for reviving dangerous pseudoscience that uses people’s physical features to “determine” essential aspects of their personhood.
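
One way to see why a headline figure like “81 percent of cases” can mislead: when the characteristic a classifier claims to detect is relatively uncommon in the population, even a seemingly accurate model mostly flags the wrong people. The arithmetic below is purely illustrative; the base rate and error rates are assumptions chosen for the example, not figures from the paper.

```python
# Illustrative arithmetic only: all numbers are assumptions, not the paper's metrics.
population = 1_000_000
base_rate = 0.07       # assumed share of gay men in the population
sensitivity = 0.81     # assume the classifier flags 81% of gay men
specificity = 0.81     # assume it correctly passes over 81% of straight men

gay_men = population * base_rate
straight_men = population - gay_men

true_positives = gay_men * sensitivity
false_positives = straight_men * (1 - specificity)

precision = true_positives / (true_positives + false_positives)
print(f"Share of flagged men who are actually gay: {precision:.0%}")  # about 24%
```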

In the paper Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities, Nenad Tomasev et al. note that such systems “threaten to reinforce biological essentialist views of sexual orientation and echo tenets of eugenics—a historical framework that leveraged science and technology to justify individual and structural violence against people perceived as inferior.”

Even though Kosinski’s “AI Gaydar” turned out to be scientific rubbish, researchers and companies persist in trying to use our biometric data to build systems that make inferences about complex aspects of our identities. Researchers published a paper in July 2020 on using the body mass index (BMI) of politicians as a predictor of political corruption, hypothesizing that overweight politicians were more corrupt. Even more alarming is a paper published in 2020 entitled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” in which the authors claim to be able to predict “criminality” by analyzing people’s faces.

The latter paper inspired over 1,000 AI experts to sign a letter condemning the research and outlining the “ways crime prediction technology reproduces, naturalizes and amplifies discriminatory outcomes, and why exclusively technical criteria are insufficient for evaluating their risks.” As they further noted, “there is no way to develop a system that can predict or identify ‘criminality’ that is not racially biased — because the category of ‘criminality’ itself is racially biased.”

Going forward: how to safeguard LGBTQ rights in the age of AI

The threats to the rights of LGBTQ people and communities go beyond the examples we have cited. The paper by Nenad Tomasev et al. mentioned above explores the negative and positive impacts of a wide range of AI applications, and urges people to analyze those impacts through an intersectional lens that includes notions of economic and racial justice.

The good news is that we can take action — not only to reduce the harms caused by AI systems to LGBTQ communities, but also to make sure that AI systems protect and enforce their rights. As we explain in our campaign with All Out, some applications of AI must be banned outright, such as AGR and AI-based “detection” of sexual orientation. These systems cannot be fixed by simply introducing more diverse training data, increasing accuracy, or applying technical methods to reduce bias; the fundamental aim of these systems is incompatible with our rights. They actively undermine years of work in the struggle for gender justice and LGBTQ rights. In these cases, we need lawmakers to take bold action to safeguard our rights.

But companies, programmers, and designers also have a role to play here. When creating AI systems and user interfaces, they can make important decisions about what features to build, and how to best include and empower people with diverse identities. That is an obligation that needs to be taken seriously, because as Deborah Raji pointed out in a recent piece on how data encodes systematic racism, “[t]he fact is that AI doesn’t work until it works for all of us.” That’s why we’re calling on everyone involved in the development and deployment of these systems to do everything they can to make sure they protect and empower people. Otherwise the “AI-driven” future we are being sold will just replicate, cement, and worsen injustice.