Artificial intelligence and human rights
The use and abuse of AI and automated decision-making can not only facilitate human rights violations and exacerbate existing societal power imbalances, but also create new risks that disproportionately affect marginalized people and communities. AI design, development, and deployment must respect human rights. We urge governments and companies to adopt binding, enforceable legal frameworks rooted in human rights law and principles, not voluntary or self-regulatory ethics-based approaches.
What you need to know about generative AI and human rights
In this explainer piece, we’re cutting through the hype to get to the truth about what generative AI can (and can’t) do, and why it matters for human rights worldwide.
SPOTLIGHT: EU AI ACT
A summary of our proposed amendments to the draft EU AI Act and a timeline of our related commentary and recommendations.
The EU AI Act is supposed to protect the rights of everyone impacted by AI systems. But it ignores the systems impacting people on the move. Here are three steps policymakers can take to fix that problem.
Access Now and over 110 civil society organisations have laid out proposals to make sure the European Union’s Artificial Intelligence Act addresses the real-world impacts of the use of artificial intelligence.
Artificial intelligence and automated decision-making systems threaten our fundamental rights. Yet the EU is considering an approach to AI regulation that would replace rights-based protections with a mere risk-mitigation exercise conducted by corporations with a vested interest in these systems. Here’s why that’s a grave mistake.
Generative AI has been all over the headlines. But what are the human rights implications? Get the facts in our generative AI FAQ.
You may have a basic understanding of what AI is. But are you familiar with the issues it raises for your fundamental rights?
On Human Rights Day 2020, we highlight the mandate of E. Tendayi Achiume, the United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, in the fight against systemic racism.
The military junta in Myanmar is rolling out China-made CCTV cameras with facial recognition capabilities to intensify surveillance of the people.
Most of us interact with some sort of artificial intelligence (AI) system several times a day, whether it’s using the predictive text function on our phones or applying a selfie filter.
Access Now endorses the Algorithmic Accountability Act of 2022, which will help combat algorithmic discrimination in defense of human rights.
Buenos Aires, Brasilia, and Uruguay are pushing for the use of facial recognition systems for “public security,” seeking to authorize the invasive and harmful use of mass surveillance tools. Civil society must fight back.
Watermarking & generative AI: what, how, why (and why not)
How can we identify content produced by generative AI tools? One solution being touted is watermarking – but what are the human rights risks?
EU policymakers: regulate police technology!
Civil society calls on EU policymakers to regulate police technology.
Joint statement: EU legislators must close dangerous loophole in AI Act
Big Tech has lobbied to introduce a major loophole into the EU AI Act’s high-risk classification process. We call on EU legislators to maintain a high level of protection in the AI Act.
An open letter to the RightsCon community about RightsCon Costa Rica and what comes next
We explain the challenges and exclusion some participants faced, apologize and take accountability for our role, and share thoughts on the road ahead.
EU Trilogues: The AI Act must protect people’s rights
As EU trilogues on the AI Act kick off, Access Now and partners call on institutions to put fundamental rights first.