Algorithmic accountability

What you need to know about generative AI and human rights

Every day, newspaper columns and social media feeds are filled with equal amounts of overblown optimism about how ‘generative AI’ will change the world and sci-fi doom-mongering about it ending humanity. Amid all this noise, commentators and companies alike are overlooking the reality of how newly popular AI applications are already impacting people’s lives and their fundamental rights. In this explainer piece, we’re cutting through the hype to get to the truth about what generative AI can (and can’t) do, and why it matters for human rights worldwide.

  1. What is ‘generative AI’ and why am I only hearing about it now? 
  2. How does generative AI work?
  3. What are the limitations of generative AI systems?
  4. How do the real risks of generative AI compare to the myths? 
  5. How do we make AI safe?
  6. Where can I learn more about generative AI and human rights?

1. What is ‘generative AI’ and why am I only hearing about it now? 

Since late 2022, when OpenAI’s ChatGPT tool burst into the mainstream, everyone has been talking about generative AI. Although it may be a novelty for most people, the underlying technology has been around for years, primarily in two forms: 

  • On the one hand, there are large language models (LLMs), such as the one that underpins ChatGPT, which generate plausible-sounding text in response to a human prompt in the form of a request (e.g. ‘write a sonnet about the risks of AI in the style of Shakespeare’). But the easy-to-use, conversational ChatGPT interface so many people are experimenting with today is merely a refined version of previous iterations of the same technology, such as GPT-3, rather than something radically new or unprecedented. 
  • On the other hand, multi-modal models, such as Stable Diffusion, Midjourney, or OpenAI’s DALL-E 2, typically take text prompts (e.g. ‘a purple penguin wearing sunglasses’) and generate images as an output. Some models, such as GPT-4, can also take images as input (e.g. a photo of your fridge’s contents) to produce text as the output (e.g. a recipe for the ingredients you have). Multi-modal models that can generate audio and video outputs are also in development.

Last year, impressive advances in multi-modal models catapulted them into the public consciousness. Social media was flooded with quirky, AI-generated images and avatars that looked cool and cute – until people realized they were generated by systems that used human artists’ work as training data without their consent and without due compensation.

2. How does generative AI work?

There is no universally agreed definition of artificial intelligence and it has meant different things over time. But nowadays when most people use the term, they are referring to an approach to computing known as ‘machine learning.’ Machine learning involves humans feeding huge amounts of data into a ‘learning algorithm’ that extracts patterns and rules from that data, and uses those rules to make predictions. Although first envisioned in the 1950s, machine learning only really took off around 2012, with the advent of powerful computers and newly available, massive amounts of data generated by social media and other online activity – a match made in machine learning heaven. 
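To make that loop concrete, here is a deliberately tiny sketch in Python, using made-up example data of our own: a ‘learning algorithm’ extracts a single rule from labeled examples, and that rule is then reused to make predictions about new inputs. Real machine learning systems learn millions or billions of parameters from vastly larger datasets, but the basic pattern is the same.

```python
# Toy illustration of the machine-learning loop described above.
# (Hypothetical data and rule, for illustration only.)

# Training data: (hours of daylight, did the street lights turn on?)
training_data = [(2, True), (4, True), (6, True), (10, False), (12, False), (14, False)]

def learn_threshold(examples):
    """'Learning algorithm': extract one rule (a threshold) from labeled examples."""
    on_values = [hours for hours, lights_on in examples if lights_on]
    off_values = [hours for hours, lights_on in examples if not lights_on]
    # Place the decision boundary midway between the two groups.
    return (max(on_values) + min(off_values)) / 2

def predict(threshold, hours_of_daylight):
    """Apply the learned rule to a new, unseen input."""
    return hours_of_daylight < threshold

threshold = learn_threshold(training_data)   # "training"
print(predict(threshold, 5))                 # True  -> lights predicted on
print(predict(threshold, 13))                # False -> lights predicted off
```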

Machine learning is commonly used to train facial recognition systems with huge datasets of faces. With this training data, the systems ‘learn’ to identify faces in images and predict whether there’s a match between two faces. Meanwhile, generative AI systems are trained on huge datasets of text, images, and other media in order to produce similar but synthetic content. These systems also make predictions about the text likely to follow a given prompt, but they generate content as their output; hence the term ‘generative AI.’ Such systems can imitate the work of famous writers or artists included in their training data – but they will also replicate any biases from the content they are trained on, such as racist language or sexist imagery.
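The ‘prediction’ at the heart of generative text systems can be illustrated with a toy model that only knows how to guess the next word. The sketch below is our own illustrative example, not how any real LLM is implemented: it counts which word follows which in a tiny sample of training text, then ‘generates’ new text by chaining those predictions together. Notice that it can only ever reproduce patterns present in its training data – which is also why biases in that data come out the other end.

```python
import random
from collections import defaultdict

# Toy "generative" model: learn which word follows which in the
# training text, then generate new text by repeatedly predicting
# a plausible next word.
training_text = "the cat sat on the mat and the dog sat on the rug"
words = training_text.split()

# Learn the patterns: for each word, which words have followed it?
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def generate(prompt_word, length=6):
    """Chain next-word predictions to produce new text from a prompt."""
    output = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:          # nothing learned for this word; stop
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug" -- plausible, not "understood"
```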

3. What are the limitations of generative AI systems?

As mentioned, machine learning systems replicate patterns from their training data; they generate content based on what they’ve seen before. But since, unlike humans, they cannot actually understand the data they take in or put out, this can lead them to replicate harmful biases, including outright racist and sexist assumptions. Just as filtering is used to moderate content on social media platforms, similar filters can be layered on top of generative AI systems to try to catch prompts that may lead to harmful outputs, as well as the harmful outputs themselves. 
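As a rough illustration of what such a filtering layer looks like, here is a minimal, purely hypothetical sketch: it wraps an imaginary generate() function and screens both the incoming prompt and the outgoing response against a keyword blocklist. Real deployments use trained classifiers rather than keyword lists, but as the next paragraph explains, even those more sophisticated filters are easy to circumvent.

```python
# Purely illustrative filtering layer around an imaginary generate()
# function: screen the prompt going in and the response coming out.

BLOCKLIST = {"forbidden_topic", "harmful_term"}   # placeholder terms

def contains_blocked_terms(text):
    """Return True if any blocklisted term appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def filtered_generate(prompt, generate):
    """Wrap a text-generation function with input and output filters."""
    if contains_blocked_terms(prompt):
        return "Sorry, I can't help with that request."
    response = generate(prompt)        # the underlying model call
    if contains_blocked_terms(response):
        return "Sorry, I can't show you that response."
    return response
```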

But as with automated moderation on social media platforms, there are serious limitations to automatically detecting things like hate speech or illegal content. It is also easy to ‘jailbreak’ these systems and make them produce toxic content by bypassing any filters in place. Moreover, we know that the humans training these systems to recognize this content and label it as such are often underpaid, with limited to no wellbeing support (despite being exposed to horrendous content day in and day out). Generative AI is built on the exploitation of such people, many of whom are now uniting to demand due recognition and redress.  

An examination of some popular generative AI training datasets has shown them to be full of misogyny, pornography, and malignant stereotypes. And since these systems often act as ‘foundation models’ for other applications and services, those same biases find their way into other systems and apps, such as the Lensa selfie-enhancing and avatar-generating app, which produced non-consensual sexualized and even nude images of Black, Asian, and Latina women due to biases in the system it was built on.

LLMs are also limited, to a dangerous degree, by the fact that they regularly produce completely false information in response to prompts, a phenomenon that has been variously called ‘fabrication,’ ‘confabulation,’ or even ‘hallucination’ (although this last term is problematic, as it anthropomorphizes the technology). For now at least, LLMs are only able to provide linguistically plausible responses to prompts, with no guarantee of accuracy or even truthfulness (even though companies like OpenAI have claimed this is a problem that can be solved). 

For example, when we asked ChatGPT “What are Access Now’s most relevant publications on AI?” it generated a list of very impressive-sounding pieces which, based on the titles, style, and tone, we very well could have written. Except we didn’t. Out of a list of five items suggested by ChatGPT, only one was a real publication written by Access Now team members. Beyond fake Access Now blog posts, newspapers and libraries are also facing a deluge of people trying to find non-existent articles and books suggested by ChatGPT.

4. How do the real risks of generative AI compare to the myths? 

Despite the hype, generative AI systems like GPT-4 or Google’s Bard are not ‘superintelligent’ – not even close. They aren’t conscious and there’s no danger of them seizing power and eliminating humanity. In the same basic way your smartphone auto-completes a text message, these systems simply respond to prompts by generating sophisticated, plausible responses. But the fact that they can produce a Shakespearean-sounding sonnet or a very cool-looking penguin does not mean they can be used to automate roles and responsibilities that require factual accuracy, reliability, understanding, expertise, empathy, and so on. 

While plenty of people, especially those financially invested in the technology, are relentlessly optimistic about generative AI replacing all sorts of jobs, we should remain skeptical of overblown promises and focus on observable effects. A lot of the current media focus is on the so-called existential risk posed by powerful AI, i.e. the idea that AI systems could become so powerful that they would purposefully, or accidentally, wipe out humanity. But researchers such as Joy Buolamwini, Safiya Noble, Timnit Gebru, Emily Bender, Melanie Mitchell, and others have been quick to debunk such sci-fi speculation. Instead, they point out, the world should be paying attention to the very real and heartbreaking harms AI is already perpetrating. Examples of this include: 

Unfortunately, generative AI is only likely to add to this bleak catalog of harms. We are already seeing, for instance, the use of cutting-edge generative AI systems to make disinformation campaigns cheaper and more convincing.

There is also a considerable risk that AI systems will only help Big Tech companies consolidate their power, since they are the ones with access to the massive amounts of data, computational power, and technical expertise needed to make the most of such systems. Given the increasing push to integrate AI into everything from the provision of social services to the assessment of job applications, this raises serious concerns. Do we really want opaque, proprietary AI systems infiltrating all aspects of our lives?

5. How do we make AI safe?

There’s a strong case to be made for banning certain AI applications that pose an unacceptable risk to human rights. However, for the moment, there’s no strong call for a ban on generative AI models, even if lawmakers in various jurisdictions are debating how to rein in the harms of such systems by, for instance, mandating transparency about the data they are trained on, the hidden and exploitative labor that goes into making them work, and their environmental impact.

Stability AI, the creator of Stable Diffusion, is currently facing legal action over how it used copyrighted data to train its models, while OpenAI found itself in hot water when Italy’s data protection authority temporarily suspended access to ChatGPT while it investigated whether the tool complied with the E.U.’s General Data Protection Regulation (GDPR).

What is clear is that the companies developing generative AI must work to ensure such technology is developed and deployed in line with human rights standards, as well as with existing regulations. If these technologies, or their specific implementations, violate human rights, it shouldn’t be up to people to adapt to this new technological reality. Instead, AI developers who claim to be building a brighter future for all humanity have to face facts about how their technologies actually leave many people behind. 

As Deborah Raji and Abeba Birhane put it, “if these powerful companies cannot release systems that meet the expectations of those most likely to be harmed by them, then their products are not ready to serve these communities and do not deserve widespread release.” 

6. Where can I learn more about generative AI and human rights?

As well as following Access Now’s ongoing advocacy on this topic, we recommend checking out the work of our partners, including European Digital Rights, AlgorithmWatch, European Center for Not-for-Profit Law, AI Now Institute, and the Distributed AI Research Institute.

To stay up to date with the latest clearheaded generative AI news, subscribe to Melissa Heikkila’s newsletter, The Algorithm, and Melanie Mitchell’s newsletter, AI: A Guide for Thinking Humans.

And finally, if you’re looking for an evergreen resource debunking the myths and misconceptions surrounding AI, head to aimyths.org, developed by Access Now’s very own Daniel Leufer.

Did we miss something? Do you have more questions about generative AI and human rights? Drop a line to [email protected] and let us know what you’d like to see discussed in the future.