
Artificial Intelligence: What role for the European Union?

As the European Union launches its strategy on Artificial Intelligence (AI), we look into its role and potential impact in the global race for AI leadership.

Artificial Intelligence: the future is now

We may not be living in a science fiction scenario filled with intelligent machines just yet, but the precursors of Artificial Intelligence (AI) are already part of many people’s everyday lives. The current public debate surrounding AI encompasses a wide range of fields and processes without any widely agreed-upon definition, either from a technological or a legal standpoint. The concept covers everything from advanced algorithms and machine learning that can be used to analyse, predict, and make decisions about individuals, to the autonomous machines and killer robots we imagine as we approach the “singularity”. While we are still far away from the latter, we already rely on advanced algorithms in search engines, credit scoring, voice and text recognition, instant translation, insurance decisions, job applications, autonomous vehicles, criminal justice, and more.

With the ongoing development of machine learning and the increased use of algorithms, the role of AI in our daily lives will also increase. But are we ready for it? How are governments approaching this transformation?

AI for humanity: a promising concept

Over the past several years, world leaders from Paris to Moscow, Washington to Beijing, have been engaging in a frenetic AI race. According to Russian president Vladimir Putin, the country that leads in AI “will be the ruler of the world”. In the EU, decision-makers fear lagging behind China and the United States in AI investment. As a result, member states are frantically encouraging companies to open AI centres in the EU and increasing investment in AI research and innovation. The newly introduced EU strategy on AI aims to boost the EU’s industrial and technological capacity through public and private financial support. While the goal is to increase investment in AI to at least €20 billion by the end of 2020, the European Commission will start by increasing its funding of the Horizon 2020 research and innovation programme by €1.5 billion for the period 2018-2020. The EU also hopes to trigger an additional €2.5 billion of funding from existing public-private partnerships, for example in big data and robotics. Finally, the EU will take additional legislative and financial measures to incentivise private-sector investment in AI research.

Amid this AI FOMO, are countries forgetting to ask themselves the crucial question: what type of AI do we really want for our societies?

We should not rush through the adoption of AI simply for the sake of innovation. Not every innovation means progress for society, especially if its impacts are not carefully considered and, if need be, mitigated.

In the United States, tech companies are making massive investments in AI, yet the recent data breaches and scandals regarding misuse of data, from Equifax to Facebook, clearly show the dangers of moving forward without a comprehensive data protection and privacy framework. Without the right foundation, there is a tangible risk that AI technology developed in the US will violate the fundamental rights of individuals.

We also need to carefully consider the outcomes that we want from the use of automated processes. Do we want to use these processes to increase productivity, ensure accuracy, or guarantee non-discrimination?

For instance, let’s take a look at the use of algorithms in US criminal justice procedures. Upon arrest, it is common in many US jurisdictions for suspects to be assigned a score aimed at predicting the likelihood that they will commit a crime in the future. One of the most widely used systems is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), developed by the company Northpointe. The “risk assessment” score that COMPAS provides is used to inform decisions about whether a suspect can be set free at each stage of the criminal justice process, from setting bonds to weighing plea bargains.

An investigation by ProPublica revealed that the system has a distinct racial bias: it falsely flagged black defendants as future criminals at nearly twice the rate of white defendants, while white defendants were mislabelled as low risk more often than black defendants. Because of these scores, which reflect observable prejudice, it is not uncommon for suspects to plead guilty even if they are innocent, or to accept longer sentences, in order to get out of pre-trial detention. The scoring method and the design of the system may have made the criminal court system more “productive” by limiting the number of cases going to trial, but that benefit seems to come at the expense of fairness, justice, and equality.
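
To make that finding concrete, here is a minimal, purely illustrative sketch in Python. The data and field names are hypothetical, not ProPublica’s dataset or methodology; it only shows how a disparity in false positive rates between two groups can be measured, i.e. the share of people in each group who did not go on to reoffend but were nonetheless labelled high risk.

    from collections import defaultdict

    # Hypothetical records: (group, risk_label, reoffended). A real analysis
    # would use thousands of cases; these few rows only illustrate the mechanics.
    records = [
        ("black", "high", False), ("black", "high", True), ("black", "low", False),
        ("white", "low", False), ("white", "high", True), ("white", "low", False),
    ]

    def false_positive_rate(rows):
        """Share of people who did NOT reoffend but were labelled high risk."""
        non_reoffenders = [rec for rec in rows if not rec[2]]
        if not non_reoffenders:
            return float("nan")
        flagged = sum(1 for rec in non_reoffenders if rec[1] == "high")
        return flagged / len(non_reoffenders)

    # Group the records and compare false positive rates across groups.
    by_group = defaultdict(list)
    for rec in records:
        by_group[rec[0]].append(rec)

    for group, rows in sorted(by_group.items()):
        print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")

With real data, comparing these error rates across groups, rather than looking only at overall accuracy, is what surfaces the kind of bias ProPublica reported.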

Meanwhile, in China, authorities are testing an AI-based social credit system. Starting this month, Chinese citizens who rank low on the system could be banned from buying plane or train tickets for up to a year. These measures are likely to affect the poorest individuals in China, creating further inequalities in the country.

At the same time, in Russia, the government is focusing on the military applications of AI and foresees the possibility of the country becoming the leader in “AI arms”. The danger of “AI warfare” sparked a response from Elon Musk and 116 other technology leaders, who sent a letter to the United Nations calling for new regulations on how such AI weapons are developed. Similarly, more than 3,000 Google employees recently protested the company’s involvement in a US Department of Defense AI drone programme. The employees sent a letter to Google’s CEO, Sundar Pichai, stating that “Google should not be in the business of war.”

So yes, the EU may lag behind on AI investment — at the moment — but does that mean it’s “losing” the AI race?

The EU is far from getting everything right, and we have our share of discriminatory algorithms in human resources and insurance processes, (unlawful) profiling techniques to analyse air passengers, and more. But the EU and its member states are developing an AI strategy at a time when a comprehensive data protection reform is about to become applicable and negotiations on strengthening Europe’s privacy rules are under way. What is more, France has launched its national AI strategy around the concept of AI for Humanity. Similarly, the EU strategy recommends the creation of ethical guidelines for the use of AI, anchored in the EU Charter of Fundamental Rights and building on the work of the European Group on Ethics in Science and New Technologies.

Let’s not be naive. For the time being, these are mostly PR packages, and it is the implementation of these strategies that will determine the benefits or harms they bring to individuals and society. We currently see many references to ethics in the proposed strategies, which can be positive. But ethics is a flexible concept, subject to local context, norms, and interpretation. Most importantly, ethics is not enforceable. With ethics alone as the backdrop, there is no guaranteed protection for users.

Can the EU lead on human rights in AI? Yes.

In contrast to “ethics”, universal human rights frameworks such as the Charter of Fundamental Rights are a cornerstone of our societies, helping to protect individuals online and off. Further, international guiding principles have been developed for the implementation of human rights in the economy, and they should be applied in the context of AI. For instance, the United Nations Guiding Principles on Business and Human Rights reiterate the state duty to protect human rights, the corporate responsibility to respect human rights, and the need to guarantee access to remedy for victims of business-related abuses.

It’s imperative to embed these human rights frameworks in every aspect of the deployment of AI.  The EU now has the potential — and the responsibility — to champion AI for Humanity. Let’s not miss this opportunity.