Bodily harms: how AI and biometrics curtail human rights

Iris scanning. Voice recognition. Brain implants. Biometric technologies are no longer simply the stuff of science fiction or spy movies; they are here, and they are harming the most marginalized people. A new Access Now publication, written by Xiaowei Wang and Shazeda Ahmed from UCLA’s Center on Race and Digital Justice and the ELISAVA School of Design and Engineering, explores how AI-based biometric systems are being used to classify, categorize, and control our bodies, perpetuating discrimination in the process.

Put simply, biometric data is information about physical or behavioral characteristics that are generally unique to an individual. Physical biometric data might include someone’s facial features, fingerprints, or iris patterns, while behavioral biometric data may include gait, signature, or voice patterns. All of this data can be fed into biometric systems, which use artificial intelligence (AI) – i.e. “machine learning” algorithms – to make predictions about people. For example, such algorithms can extract a biometric template from an image of a person’s fingerprint and match it against others in a database in order to identify that person. Biometric systems range in complexity and sophistication, from widely available “low-end” tech such as video cameras and voice recorders, to more restricted “high-end” tools that allow for capturing and reading brain wave data, for instance.
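To make the “template matching” step above concrete, here is a deliberately simplified, hypothetical sketch in Python. It is not how any particular vendor’s product works: real systems use specialized trained models to turn a fingerprint or face image into a template (a feature vector), whereas the `extract_template` function below is only a stand-in, and the names, threshold, and data structures are illustrative assumptions.

```python
# Minimal illustration of biometric identification by template matching.
# NOTE: extract_template is a placeholder; production systems use trained
# deep-learning models to produce the template (embedding vector).
import numpy as np


def extract_template(image: np.ndarray) -> np.ndarray:
    """Reduce an image to a fixed-length, unit-normalized feature vector."""
    vec = image.astype(float).flatten()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)


def identify(probe_image: np.ndarray,
             database: dict[str, np.ndarray],
             threshold: float = 0.9) -> str | None:
    """Return the ID of the most similar enrolled template, if any passes the threshold."""
    probe = extract_template(probe_image)
    best_id, best_score = None, threshold
    for person_id, enrolled in database.items():
        score = float(np.dot(probe, enrolled))  # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

In practice, the choice of similarity threshold determines the trade-off between false matches and false non-matches, which is one reason accuracy claims for such systems need careful scrutiny.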

While techniques such as fingerprinting or gait analysis aren’t new, they and a whole range of other biometric technologies have been supercharged by increasingly sophisticated machine learning algorithms and the availability of cheap hardware and infrastructure. They can now be found everywhere: in systems used to monitor people designated as criminals, to gatekeep refugees’ access to food and other basic services, and to commodify people’s emotions. Since the COVID-19 pandemic, we’ve seen a particular rise in such technologies’ use in the medical field, with proponents claiming they can do everything from diagnosing cardiovascular or Parkinson’s disease to “treating” autism.

Despite extraordinary claims from their creators, many of these AI models are far less accurate and reliable than advertised. At best, they lack any scientific basis; at worst, they put a shiny “AI” veneer onto racist pseudoscience like phrenology. In addition, AI-based biometric technologies are particularly prone to “function creep”: even if they are developed for one seemingly innocuous application, they can easily be repurposed for far more nefarious uses. For example, a biometric voice recognition system that claims to detect markers of mental distress or anxiety could be repurposed as an “AI lie detector” used by law enforcement at the border, while an eye-tracking function built for VR gaming might be deployed to enable surveillance, tracking, or profiling.

One of the main concerns identified in Access Now’s latest report is how biometric technologies are used to define what constitutes a “normal” body. All bodies are then expected to fit that single template, and anyone whose body does not fit the norm is excluded. In many cases, the people pushed aside are disabled, and often already marginalized. To make matters worse, this is happening even as AI and biometric tech manufacturers claim that their tools’ potential to “cure” specific health conditions or disabilities means they perform an essential social good and should therefore be regulated more loosely.

As disability justice advocates have pointed out, this is an inherently ableist approach; not least because it starts from the already flawed premise that someone’s disability is something that needs to be fixed. In addition, the creators of such technologies cast disabled people as subjects of research and innovation to be exploited for profit and used as shields from regulatory scrutiny, rather than active designers of assistive technology that will actually serve them.

It is vital that researchers and tech companies move beyond seeing marginalized groups as mere users of their tools and start seeing them as builders of the technology itself, incorporating them into design and decision-making rather than paying lip service to “consultation” at the last minute. Policymakers can support this by creating regulatory frameworks that allow, or mandate, shared governance around the building, deployment, and even banning of certain biometric technologies that are incompatible with human rights. Systematizing stakeholder engagement early on may also help mitigate scenarios where AI-based biometric technologies inadvertently or deliberately deepen inequities or power imbalances, as we’ve already seen happen with automated decision-making.

As our report’s authors point out, “the global tech industry operates on the premise that the fundamental issues with biometrics are settled.” But their work shows this is far from the case: the current and emerging state of biometric technologies is far from “neutral, fair, and scientifically irreproachable.” Before making any more promises for a better world, biometric tech companies must urgently address the serious bodily harm being perpetrated by their products.