AI Biometrics

No such thing as a “normal” body: new report explores how AI biometrics oppress people

From facial recognition to iris scanning, people’s biometric data is collected every day and fed into artificial intelligence (AI) systems that influence how they live and how they are treated by society.

Bodily harms: mapping the risks of emerging biometric tech, a new Access Now publication written by Xiaowei Wang and Shazeda Ahmed from UCLA’s Center on Race and Digital Justice, explores how AI-based biometric technology systems are being used to classify, categorize, and control people’s bodies, enabling discrimination and oppression. Read the snapshot and full report, and watch the first episode of Access Now’s new video series, How AI is defining our bodies.

“While the media focuses on sci-fi scenarios about sentient AI wiping out humanity, the real harms are happening here and now. Flawed biometric systems are used to profile, oppress, and police people. They are undermining people’s rights and are supercharging discrimination, marginalization, and criminalization.”

Daniel Leufer, Senior Policy Analyst at Access Now


As disability justice advocates interviewed for the report highlight, biometric technologies are used to define what constitutes a “normal” body, excluding or categorizing the millions of people who exist outside those parameters and often pushing aside people with disabilities. These technologies also reproduce existing biases around who is criminalized and discriminated against.

“People with disabilities have long worked to create technologies that can open up a more liberatory world for everyone — such as optical character recognition — yet over time many of these technologies have become co-opted not only for profit, but as tools for disciplining bodies that fall outside of an imposed ideal of ‘normal.’”

Xiaowei Wang and Shazeda Ahmed, UCLA’s Center on Race and Digital Justice


The new report draws on document analysis and expert interviews to unpack the ableist foundations of biometric systems such as voice recognition, gait analysis, eye tracking, and other forms of invasive data collection, and to explore how their development is incentivized and sustained by false panic around “welfare fraud” or “national security.”

The report calls for wariness toward technologies that perpetuate ableist promises of “curing” disability, for avoiding the use of technology that enacts curative violence (attempting to eradicate a “problem”), and for promoting assistive technologies instead. Recommendations for the future governance of AI-based biometric technology systems include:

  • See marginalized people and groups as co-creators of technology, not just “users”;
  • Assess when a biometric technology is used to gate-keep access to benefits (such as fraud detection that has incorrectly denied people living essentials) and to re-entrench asymmetrical power dynamics;
  • Be open to banning certain uses of technology when responsible use may not be possible; and
  • Cultivate interdisciplinary research spaces and consortia to address structural impacts before a technology is launched.