From facial recognition to iris scanning, people’s biometric data is collected every day and fed into artificial intelligence (AI) systems that shape how they live and how society treats them.
*Bodily harms: mapping the risks of emerging biometric tech*, a new Access Now publication written by Xiaowei Wang and Shazeda Ahmed of UCLA’s Center on Race and Digital Justice, explores how AI-based biometric technology systems are being used to classify, categorize, and control people’s bodies — enabling discrimination and oppression. Read the snapshot and full report, and watch the first episode of Access Now’s new video series, How AI is defining our bodies.
As highlighted by disability justice advocates interviewed for the report, biometric technologies are used to define what constitutes a “normal” body, excluding or categorizing millions of people who exist outside these parameters, often pushing aside people with disabilities. Furthermore, these technologies reproduce existing biases around who is inherently criminalized and discriminated against.
The new report draws on document analysis and expert interviews to unpack the ableist foundations of biometric systems such as voice recognition, gait analysis, eye tracking, and other forms of invasive data collection, and explore how their development is incentivized and sustained by false panic around “welfare fraud” or “national security.”
The report urges wariness of technologies that perpetuate ableist promises of “curing” disability, calls for avoiding technology that enacts curative violence (attempting to eradicate a perceived “problem”), and advocates promoting assistive technologies instead. Its recommendations for the future governance of AI-based biometric technology systems include:
- See marginalized people and groups as co-creators of technology, not just “users”;
- Assess when a biometric technology is used to gate-keep access to benefits — such as fraud detection that has wrongly denied people living essentials — or to re-entrench asymmetrical power dynamics;
- Be open to banning certain uses of technology when responsible use may not be possible; and
- Cultivate interdisciplinary research spaces and consortia to address structural impacts before a technology is launched.