UN Global Digital Compact

Why human rights must be at the core of AI governance

Since the 2022 launch of OpenAI’s ChatGPT, the world has been consumed by hype around so-called generative artificial intelligence (AI), with regulators now scrambling to rein in its harms and excesses. But none of this is new. Already in 2018, when advances in machine learning led to a boom in AI enthusiasm, we saw a similar, simultaneous explosion of toothless “AI ethics guidelines” and other voluntary self-regulatory proposals. At the time, Access Now and others called for enforceable, human rights-based legal instruments, launching The Toronto Declaration at RightsCon 2018 to demand respect for the right to equality and non-discrimination in machine learning systems. Since then, digital rights organizations have consistently advocated for human rights-centric AI governance. But this effort is now under threat from several angles, and with it any chance of achieving a reality in which AI development serves people instead of surveilling and exploiting them.

Six years on, the casual observer could be forgiven for thinking that the question of AI governance is a settled one. After all, this year alone has seen the adoption of the EU’s AI Act, the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CoE AI Treaty), and countless other global regulatory proposals, all seemingly grounded in the human rights framework.

The UN recently weighed in with two complementary resolutions, finding consensus at the General Assembly that AI impacts human rights, and even adopting some commendable language warning that certain uses of AI are incompatible with human rights. The UN’s High-level Advisory Body on AI just released a report reiterating that “AI governance does not take place in a vacuum” and “that international law, especially international human rights law, applies in relation to AI.” However, the current reality is anything but rosy.

What’s threatening rights-based AI governance? 

The first problem is that the same legal instruments being celebrated as groundbreaking, such as the EU AI Act and the CoE AI Treaty, are so full of exceptions, exemptions, and derogations that they make a mockery of any claims of promoting rights-based governance of AI. The EU AI Act fails to ban the most dangerous uses of AI, arbitrarily discriminates against migrant people, and exempts law enforcement and migration authorities from transparency requirements.

Meanwhile, both of the aforementioned UN General Assembly resolutions — led by the U.S. and China, respectively — explicitly speak only to the “non-military domain”; China’s resolution further specifies that it “does not touch the development or use of artificial intelligence for military purposes,” and neither resolution even acknowledges the environmental costs of AI.

The European Center for Not-for-Profit Law (ECNL) notes that, throughout the negotiation of the CoE AI Treaty, “calls to live up to the pledges of horizontal human rights protection without blanket exemptions, avoid double standards for public/private sector, and avert vaguely worded commitments that turn rights into general principles were blatantly ignored.” This double standard is most egregiously illustrated by the fact that Israel has signed the treaty even as it deploys dystopian AI targeting systems to automate and accelerate the production of mass kill lists in Gaza. As Digital Action’s Mona Shtaya rightly notes, this contradiction between the commitments made on paper and the reality on the ground not only “exposes a troubling hypocrisy,” but also “raises serious questions about the international community’s commitment to genuine accountability.”

On top of overly broad national security and defense exemptions, we’ve also seen a huge industry lobbying push to water down obligations and carve out further exemptions. We’ve even heard calls to exclude private actors from the CoE AI Treaty’s scope by default. Innovation, national competitiveness, and other nebulous concepts have been floated as being under threat from efforts to protect people’s rights. The dogma has become “bigger models and more AI everywhere at all costs” — even when those costs are human, natural, or environmental.

An additional, more complex threat to human rights-based governance is the growing influence of individuals and organizations aligned with the billionaire-backed effective altruist (EA) community. Anyone participating in AI policy debates will have noticed the mushrooming of advocacy organizations, often with the word “future” in their names, focused either on so-called “AI safety” or on the “long-term” and “existential” risks of AI. At the fringes, some of these organizations have been linked to racist and eugenicist philosophies, but even the more moderate among them take a radically utilitarian approach to AI governance: one that is not only fundamentally misaligned with human rights, but that also distracts from the real harms of AI by focusing public attention on speculative risks instead.

Attention is also pulled away from the negative impacts of AI by the understandable desire to ensure that its benefits are evenly distributed. Most AI-related resources, power, and know-how are concentrated within a few companies and countries; it’s been estimated that North America accounts for at least 40 percent of global AI revenue. Much of the international policy debate therefore focuses on capacity-building and on increasing access to technical infrastructure, computational power, data, and talent, as part of wider digital transformation efforts.

These factors combine to undermine attempts at taking a strong, rights-based governance approach to AI, while also suiting industry actors who would rather avoid strong regulatory interventions. Even the effective, yet largely voluntary, standards developed over decades by the free and open source software community are being distorted by companies eager to unleash their untested products on the global market. Meanwhile, regulatory energy and focus continue to be drained by fear-mongering over science fiction risks that will never materialize, even as the very real human rights violations driven by AI-powered surveillance, the distortion of our information ecosystem, and the supercharging of online gender-based violence and child sexual abuse material (CSAM) are ignored.

What needs to happen and why does it matter?

Put simply, developments in AI cannot benefit people if human rights are not respected. International bodies such as the UN and its member states must establish global AI governance norms rooted in international human rights law, starting by acting on the findings of the UN High-level Advisory Body on AI’s new report. Human rights must be centered in negotiations around international instruments and fora, such as the new Global Digital Compact or the upcoming AI Action Summit, and governments must ensure that human rights defenders, civil society, and legal bodies have a seat at the table in global AI governance discussions.

Internally, the UN must mandate human rights due diligence when its agencies procure or integrate new and emerging technologies into their work. Member states should adequately fund and support the UN Office of the High Commissioner for Human Rights to expand its work with civil society, countries, and companies developing AI, including through its “human rights in the digital space” advisory service, an idea affirmed in the Global Digital Compact. Externally, the UN must expand on the consensus resolutions that have found certain applications of AI to be incompatible with human rights, by defining what those applications are (such as biometric mass surveillance or predictive policing) and taking concrete steps to prohibit them.

Companies also have a role to play in the process. Many of the largest AI developers and deployers have committed to upholding the UN Guiding Principles on Business and Human Rights; companies should adhere to this framework as they conduct human rights due diligence to properly identify and mitigate AI risks. This also means including civil society, human rights defenders, and impacted communities in discussions around AI governance from the outset, and not merely as a “tick the box” afterthought. 

It is undeniable that the use of AI disproportionately harms and discriminates against already marginalized groups, including women, non-binary, and LGBTQ+ people, and racialized communities. Carving out exemptions and placing profits above people will exacerbate human rights violations, roll back hard-won victories on privacy and data protection, accelerate the climate catastrophe, widen national and global economic disparities, and ultimately allow AI to accelerate injustice rather than benefit humanity. In such a scenario, discussions around the design, scope, and regulation of AI will remain concentrated in the Global Minority, even as people of the Global Majority are exploited to build and test AI systems, without basic human rights safeguards or accountability mechanisms. This is the truly dystopian future promised by unchecked, ungoverned AI — and we can’t let this nightmare become a reality.