
Access Now applauds U.S. Blueprint for AI Bill of Rights, but more safeguards needed

Access Now welcomes the U.S. White House Office of Science and Technology Policy’s (OSTP) Blueprint for an AI Bill of Rights and accompanying Fact Sheet announcing agency actions to help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so they protect the rights of the public.

“The AI Bill of Rights could have a monumental impact on fundamental civil liberties for Black and Latinx people across the nation, but it conspicuously omits safeguards against discriminatory impacts of AI systems that can exclude and vilify particular groups of people,” said Willmary Escoto, U.S. Data Protection Lead at Access Now, who was present at the Blueprint launch. “The framework highlights the importance of data minimization, which Access Now steadily advocates for, while naming and addressing the diverse harms people experience from other AI-enabled technologies, like so-called emotion recognition.”

The Blueprint notably does not address the discriminatory impacts that AI systems have on non-U.S. citizens and people migrating, displaced, or seeking asylum or refuge, who face particular risks of algorithmic exclusion and vilification. This must be rectified immediately: it is vital that any AI Bill of Rights safeguard the rights of all.

“AI systems developed to exploit our private thoughts and feelings, and which are based on scientifically shaky assumptions, represent one of the greatest threats to the United States’ desire to foster public trust and confidence in AI technologies and protect civil liberties,” added Escoto.

The Blueprint is the culmination of a year-long process to “make sure new and emerging data-driven technologies abide by the enduring values of American democracy.” Access Now has engaged with the OSTP throughout this process, including through listening sessions, requests for comments, and private meetings to discuss the harms of emotion recognition technology. 

The Blueprint emphasizes five protections to which everyone in the U.S. should be entitled:

  1. Protection from unsafe and ineffective automated systems;
  2. Algorithmic discrimination protections;
  3. Protection from abusive data practices;
  4. Notice and explanation when an automated system is used; and
  5. The ability to opt out, and access to remedies.

The accompanying Fact Sheet provides examples and concrete steps for communities, industry, governments, and others to take to build these key protections into policy, practice, and the technological design process.