The Mobile World Congress (MWC) is the world’s largest gathering for the mobile industry, organised by the GSMA. I was invited to MWC Shanghai to chair the “Think, AI Summit” and present Access Now’s work on artificial intelligence (AI) and human rights. This massive trade show attracted more than 550 exhibitors and 60,000 attendees, including 4,000 conference participants.
The speakers at Think, AI explored the opportunities and benefits AI can bring, including the transformation of healthcare, lifestyle, and telecoms.
In the discussions on healthcare, we saw, for instance, smartphone solutions for hearing screening in countries where access to health services is so limited that regional ratios reached 1.2 million people per ear, nose, and throat surgeon; 0.8 million people per audiologist; and 1.3 million people per speech therapist. Though the benefit of these solutions is obvious, a survey presented at the conference showed that there is still distrust of the technology; only 54% of respondents were “willing to engage with AI and robotics for their healthcare needs”, and answers varied significantly according to the respondent’s country of residence.
Debate on lifestyle applications of AI explored a wide range of use cases, from the personalisation of video games to improving voice recognition in language learning apps, to the smart city initiative in Helsinki where an autonomous minibus is operating a regular, scheduled service in normal traffic on public roads.
The focus of the telecoms sessions was 5G — reflecting the overall theme of the larger conference — and here the presenters put a lot of emphasis on necessary improvements with regard to security.
Risks and concerns: Whither privacy?
A comment from one speaker captures what seemed to be the overall attitude and modus operandi among participants: “We want AI to come to everybody’s life”. As the only civil society representative at the event, I responded by observing that companies and governments should pose the question differently. They should ask whether everybody wants AI to come to their lives, and at what price. Besides bringing benefits, AI and other emerging technologies introduce new challenges and can negatively impact the end users of technology, at both the individual and societal level.
One striking omission in the presentations by nearly everyone who spoke throughout the day was the specific privacy implications of the technologies being discussed. I noted in particular a lack of engagement with the issue of facial recognition technology, in both the commercial sector and policing. In authoritarian countries, facial recognition-based surveillance systems are used to control society, regardless of their lack of accuracy or human rights implications, and this effort is taking place in symbiosis with the rapidly developing tech sector. Chinese companies — some of which were present at the MWC — are developing globally competitive applications for image and voice recognition. There is a growing political narrative in Europe aimed at creating and maintaining public fear of terrorism, and it’s becoming clear that residents of democratic countries around the world are not immune to similar efforts to monitor our lives, whether online or off.
AI and human rights
To mitigate these risks, Access Now has started to work with NGO partners and both the public and the private sector to ensure that the design, development, and deployment of AI and other emerging technologies are user-centric and respect human rights.
Dunja Mijatović, the Council of Europe Commissioner for Human Rights, in her recent statement, Safeguarding human rights in the era of artificial intelligence, calls for strict regulations to protect fundamental rights. She urges stronger cooperation between state actors (governments, parliaments, the judiciary, law enforcement agencies), private companies, academia, NGOs, international organisations, and the public at large, and calls for increased efforts to teach AI literacy.
“Artificial intelligence can greatly enhance our abilities to live the life we desire. But it can also destroy them.”
Used in systems for policing, welfare, online discourse, and healthcare – to name a few examples – machine learning technologies can reinforce existing power structures and inequalities on an unprecedented scale. There is a substantive and growing body of evidence to show that if they are adopted without safeguards, machine learning systems — which are often opaque, and use processes that can be hard to explain — can easily become a tool for discriminatory or repressive practices.
The U.S. criminal justice system already uses algorithms as a basis for decisions on bail, sentencing, and parole in individual cases. Private businesses develop and sell technology and systems that rely on algorithms without adequate transparency. The technology is not only opaque; evidence shows that it is also biased against people of color. In Newark, New Jersey, Black people comprise 54% of the population but are subject to 85% of pedestrian stops and 79% of arrests, and are 2.5 times more likely to be stopped by the police than their white counterparts. As Vincent Southerland, executive director of the Center on Race, Inequality, and the Law at NYU School of Law, writes in a blog post for the ACLU, “No system or tool is perfect. But we should not add to the problems in the criminal justice system with mechanisms that exacerbate racism and inequity”.
The Toronto Declaration
Access Now and Amnesty International launched the Toronto Declaration at RightsCon 2018 to protect the right to equality and non-discrimination in machine learning systems. The Declaration is only a first step of a collaboration among industry, academia, civil society, and government representatives. It asserts that we must adhere to principles of inclusion, diversity, and equity to ensure that machine learning systems do not create or perpetuate discrimination, particularly against already marginalised groups.
In the coming months and years, we will work toward the international recognition of the Toronto Declaration. This includes advocating for its adoption in resolutions and recommendations by international bodies such as the Council of Europe and the United Nations Human Rights Council; working with states to embed the principles in government policy; and encouraging companies to incorporate them in their business practices, as part of their responsibility to respect human rights. We will also be collaborating with our partners to explore the practical application of the Toronto Declaration to the design of emerging technologies.
The declaration is currently open for endorsement by everyone, including individuals, NGOs, companies, governments, and international organizations. Already, Access Now and Amnesty International have welcomed the endorsement of Element AI, Human Rights Watch, Wikimedia Foundation, Paradigm Initiative Nigeria, and other rights groups and NGOs working in this area. We encourage you to join us in endorsing these principles to promote the rights to equality and non-discrimination in the digital age.
Special thanks to the organisers at GSMA for the opportunity to participate in MWC. We will continue our engagement to work toward rights-respecting products, policies, services, and innovation in the tech and telecoms sector, and in the field of artificial intelligence.