
Artificial intelligence and human rights
The use and abuse of AI and automated decision-making can not only facilitate human rights violations and exacerbate existing societal power imbalances, but also create new risks that disproportionately affect marginalized people and communities. AI design, development, and deployment must respect human rights. We urge governments and companies to follow binding, enforceable legal frameworks rooted in human rights law and principles, not voluntary or self-regulatory ethics-based approaches.

Why we need human rights impact assessments for AI
Around the world, governments and tech companies alike tout artificial intelligence (AI) and forms of automated decision-making (ADM) as cheap, convenient, and fast fixes for a range of societal challenges – from moderating illegal content on social media and scanning medical images for signs of disease, to detecting fraud and tracking down tax evaders.
SPOTLIGHT: EU AI ACT

Artificial Intelligence
The EU AI Act proposal: a timeline
A summary of our proposed amendments to the draft EU AI Act and a timeline of our related commentary and recommendations.

Artificial Intelligence
The EU AI Act: How to (truly) protect people on the move
The EU AI Act is supposed to protect the rights of everyone impacted by AI systems. But it ignores the systems impacting people on the move. Here are three steps policymakers can take to fix that problem.

Artificial Intelligence
The EU needs an Artificial Intelligence Act that protects fundamental rights
Access Now and over 110 civil society organisations have laid out proposals to make sure the European Union’s Artificial Intelligence Act addresses the real-world impacts of artificial intelligence.

Artificial Intelligence
The EU should regulate AI on the basis of rights, not risks
Artificial intelligence and automated decision-making systems threaten our fundamental rights. Yet the EU is considering an approach to AI regulation that would replace rights-based protections with a mere risk-mitigation exercise by corporations with a vested interest in these systems. Here’s why that’s a grave mistake.
Resources

Artificial Intelligence
Artificial Intelligence: what are the issues for digital rights?
You may have a basic understanding of what AI is. But are you familiar with the issues it raises for your fundamental rights?

Artificial Intelligence
Fighting systemic racism in the digital age: a global challenge
On Human Rights Day 2020, we highlight the mandate of E. Tendayi Achiume, the United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, to fight systemic racism.

Digital Security
Track and target: FAQ on Myanmar CCTV cameras and facial recognition
The military junta in Myanmar is rolling out China-made CCTV cameras with facial recognition capabilities to intensify surveillance of the population.

Artificial Intelligence
Computers are binary, people are not: how AI systems undermine LGBTQ identity
Most of us interact with some sort of Artificial Intelligence (AI) system several times a day, whether it’s using the predictive text function on our phones or applying a selfie filter.

Artificial Intelligence
Algorithmic decision-making in the U.S. needs accountability
Access Now endorses the Algorithmic Accountability Act of 2022, which would help combat algorithmic discrimination and defend human rights.

Artificial Intelligence
Instead of banning facial recognition, some governments in Latin America want to make it official
Buenos Aires, Brasilia, and Uruguay are pushing for the use of facial recognition systems for “public security,” seeking to authorize invasive and harmful mass surveillance tools. Civil society must fight back.
Latest Updates

What you need to know about generative AI and human rights
Generative AI has been all over the headlines. But what are the human rights implications? Get the facts in our generative AI FAQ.

Nowhere to turn: How surveillance tech at the EU borders is endangering lives
Surveillance tech at the EU borders is endangering lives. Authorities must #ProtectNotSurveil people on the move.

Amendments To The Draft European Union’s AI Act Prohibit Mass Surveillance And Criminal Profiling

Tech and conflict: a guide for responsible business conduct
This guide is meant to help tech companies think through the impacts of their decisions in the context of conflict.

More Penguins Than Europeans Can Use Google Bard

EU lawmakers’ committees agree tougher draft AI rules

EU draft legislation will ban AI for mass biometric surveillance and predictive policing

Big wins, but gaps remain: European Parliament Committees vote to secure key rights protections in AI Act
In a win for civil society, the European Parliament Committees have adopted their position on the Artificial Intelligence Act — integrating key human rights protections.

How To Delete Your Data From ChatGPT

EU lawmakers ‘hold breath’ on eve of AI vote

We need to bring consent to AI

AI moratorium, Twitter source code, Bürokratt

Italian data protection authority bans ChatGPT citing privacy violations

As AI booms, EU lawmakers wrangle over new rules

Shaping the next 20 years of digital rights in Europe
As we mark the 20th anniversary of EDRi, the European Digital Rights network, it’s an apt moment to reflect on the story of digital rights in Europe so far, and to ask the question: how can we better equip Europe for the human rights challenges of the digital age?
