
Artificial intelligence and human rights
The use and abuse of AI and automated decision-making can not only facilitate human rights violations and exacerbate existing societal power imbalances, but also create new risks that disproportionately affect marginalized people and communities. AI design, development, and deployment must respect human rights. We urge governments and companies to follow binding, enforceable legal frameworks rooted in human rights law and principles, not voluntary or self-regulatory ethics-based approaches.

What you need to know about generative AI and human rights
In this explainer piece, we’re cutting through the hype to get to the truth about what generative AI can (and can’t) do, and why it matters for human rights worldwide.
SPOTLIGHT: EU AI ACT

Artificial Intelligence
The EU AI Act proposal: a timeline
A summary of our proposed amendments to the draft EU AI Act and a timeline of our related commentary and recommendations.

Artificial Intelligence
The EU AI Act: How to (truly) protect people on the move
The EU AI Act is supposed to protect the rights of everyone impacted by AI systems. But it ignores the systems impacting people on the move. Here are three steps policymakers can take to fix that problem.

Artificial Intelligence
The EU needs an Artificial Intelligence Act that protects fundamental rights
Access Now and over 110 civil society organisations have laid out proposals to make sure the European Union’s Artificial Intelligence Act addresses the real-world impacts of the use of artificial intelligence.

Artificial Intelligence
The EU should regulate AI on the basis of rights, not risks
Artificial intelligence and automated decision-making systems threaten our fundamental rights. Yet the EU is considering an approach to AI regulation that would replace rights-based protections with a mere risk mitigation exercise carried out by corporations with a vested interest in these systems. Here’s why that’s a grave mistake.
Resources

Artificial Intelligence
What you need to know about generative AI and human rights
Generative AI has been all over the headlines. But what are the human rights implications? Get the facts in our generative AI FAQ.

Artificial Intelligence
Artificial Intelligence: what are the issues for digital rights?
You may have a basic understanding of what AI is. But are you familiar with the issues it raises for your fundamental rights?

Artificial Intelligence
Fighting systemic racism in the digital age: a global challenge
On Human Rights Day 2020, we highlight the mandate of E. Tendayi Achiume, the United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, to fight systemic racism.

Digital Security
Track and target: FAQ on Myanmar CCTV cameras and facial recognition
The military junta in Myanmar is rolling out China-made CCTV cameras with facial recognition capabilities to intensify surveillance of the people.

Artificial Intelligence
Computers are binary, people are not: how AI systems undermine LGBTQ identity
Most of us interact with some sort of Artificial Intelligence (AI) system several times a day, whether it’s using the predictive text function on our phones or applying a selfie filter.

Artificial Intelligence
Algorithmic decision-making in the U.S. needs accountability
Access Now endorses the Algorithmic Accountability Act of 2022, which would help combat algorithmic discrimination in defense of human rights.

Artificial Intelligence
Instead of banning facial recognition, some governments in Latin America want to make it official
Buenos Aires, Brasilia, and Uruguay are pushing for the use of facial recognition systems in the name of “public security,” seeking to authorize the invasive and harmful use of mass surveillance tools. Civil society must fight back.
Latest Updates

Watermarking & generative AI: what, how, why (and why not)
How can we identify content produced by generative AI tools? One solution being touted is watermarking – but what are the human rights risks?

Identifying generative AI content: when and how watermarking can help uphold human rights
Access Now’s paper on how we can identify content produced by generative AI tools. One solution being touted is watermarking – but what are the human rights risks?

Civil society groups want broader restrictions on biometrics use in AI Act

EU policymakers: regulate police technology!
Civil society calls on EU policymakers to regulate police technology.

Joint statement: EU legislators must close dangerous loophole in AI Act
Big Tech has lobbied to introduce a major loophole into the EU AI Act’s high-risk classification process. We call on EU legislators to maintain a high level of protection in the AI Act.

An open letter to the RightsCon community about RightsCon Costa Rica and what comes next
We explain the challenges and exclusion some participants faced, apologize and take accountability for our role, and share thoughts on the road ahead.

Rules to keep AI in check: nations carve different paths for tech regulation

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Early adopters in Mexico lend their eyes to global biometric project

Controversy over Worldcoin in Argentina: is it advisable to hand over your iris data for dollars?

Vietnam orders social media firms to cut ‘toxic’ content using AI

SHIFT – Living in the digital world

Civil society groups call on EU to put human rights at centre of AI Act

EU Trilogues: The AI Act must protect people’s rights
As EU trilogues on the AI Act kick off, Access Now and partners call on institutions to put fundamental rights first.

The EU still needs to get its AI Act together
