Govern AI to help people

Why we need human rights impact assessments for AI

Around the world, governments and tech companies alike tout artificial intelligence (AI) and forms of automated decision-making (ADM) as cheap, convenient, and fast fixes for a range of societal challenges – from moderating illegal content on social media and scanning medical images for signs of disease, to detecting fraud and tracking down tax evaders.

Yet at the same time, scandals over the abuse and misuse of AI systems keep piling up. Automated content moderation systems, constantly presented as the silver-bullet solution to the complex problem of illegal content online, have been shown to be flawed, limited, and prone to dangerous errors. But the dangers aren’t restricted to online spaces. In the Netherlands, tax authorities used an algorithm to detect benefits fraud and, in doing so, falsely accused and penalised thousands of people, many of them minorities or from low-income backgrounds. These examples raise the question: how can we ensure that uses of AI systems respect, or even advance, fundamental human rights?

Fortunately, the conversation has shifted away from vague, non-binding ethical guidelines, and various governments are instead proposing concrete regulation, such as the EU’s AI Act. There is also growing recognition of the need for other human rights-based approaches to AI governance – including the use of human rights impact assessments (HRIAs). In our new report, we explore the role of HRIAs in AI governance and offer recommendations for how they can be used to ensure that AI systems are developed and deployed in a rights-respecting manner.

While a great deal of research and reporting has been done on assessing the impact of AI systems, there is little consensus about what exactly we should assess and how we should do it. As we argue, it is essential that any form of AI or algorithmic impact assessment integrates the human rights legal framework, so that it can unearth potential human rights harms and propose effective mitigation strategies, including the prohibition or withdrawal of systems, when harms do occur.

However, even when human rights considerations are included in impact assessments for AI systems, the risk remains that, without clear government directives, they end up as little more than a box-ticking exercise, a perfunctory self-assessment, or a toothless list of recommendations. Our report therefore explores existing forms of impact assessment, from data protection impact assessments (DPIAs) to the impact assessment tool in Canada’s 2019 Directive on Automated Decision-Making, and highlights the shortcomings and best practices of these models.

With more and more jurisdictions mandating impact assessments for AI systems, we make several key recommendations, including the following:

  • Ensure input by civil society and those impacted, and disclose results: Alongside integrating a human rights framework into impact assessments for AI systems, we demand increased, meaningful, and properly resourced involvement of civil society and affected groups in the organisations empowered to perform assessments and audits, as well as in standardisation bodies, together with meaningful public disclosure of the results of those assessments and audits.
  • Create mechanisms for oversight if self-assessments fail to protect people: In any self-assessment regime, we demand mechanisms that trigger independent audits and assessments, as well as clear avenues for people affected by AI systems, or groups representing them, to flag harms and thereby trigger investigations by enforcement bodies.
  • Jointly develop a method for human rights-based AI risk assessment: Working with all relevant stakeholders, authorities should develop a model risk assessment methodology that explicitly addresses the human rights concerns raised by AI systems.

For more detailed analysis and the full set of recommendations, check out the full report. 

After years of failed promises from the AI industry, such as the ever-shifting deadline for fully self-driving cars or the claim that AI would replace human radiologists by 2021, we’ve learned to be healthily sceptical about its benefits, but we shouldn’t be complacent about its risks. As AI systems are integrated into more domains and become more powerful, their potential to cause harm grows, and new risks will emerge. Mandating human rights impact assessments for these systems is vital to ensure that when we start to see benefits from AI technologies, they are for people, not just profit margins.