Algorithmic accountability

The experts have finished; now politicians must deliver – the Council of Europe publishes expert recommendations on the human rights impacts of algorithmic systems

On 4 November 2019, the Council of Europe's expert committee finalised its recommendations on the human rights impacts of algorithmic systems. Access Now welcomes the paper with great enthusiasm. We especially appreciate the high level of expertise reflected in the guidelines for states and private-sector actors addressing the impact of algorithmic systems on individuals' human rights. We urge the member states to adopt the text in due course and ensure its implementation by both the public and private sectors.

The recommendations were prepared by the Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). The expert paper is now final; it has been sent to the Committee on Media and Information Society (CDMSI) and is publicly available. The CDMSI consists of government representatives nominated by member states, who can table modifications and new text directly to the Committee of Ministers. The expert committee's work is now finished, and the final outcome depends on the CDMSI members. According to the publicly available timeline, their review will take place in early December. Access Now urges all representatives in the CDMSI to adopt the strongest possible recommendations to prevent and mitigate the negative impacts of algorithms on human rights.

In August 2019, Access Now and the Wikimedia Foundation submitted joint comments on the expert committee’s draft recommendations on the human rights impacts of algorithmic systems. We will analyse the revised recommendations based on our suggestions submitted in the consultation process as outlined below. 

In our response to the consultation we first (1) outline our suggestions for further edits of the main principles stated in the recommendation's preamble; then (2) propose concrete improvements for the guidelines governing states' obligations and private-sector actors' responsibilities to protect the human rights of online users in the context of algorithmic systems; and finally (3) emphasise the most essential elements of the guidelines that Access Now fully supports and has been advocating for in our own work.

The primary aim of our suggestions for edits and additions was to strengthen the ideas already present in the draft recommendation, and in some cases to align individual recommendations with other Council of Europe statements on algorithmic systems, in particular by adopting the language from the Council of Europe Human Rights Commissioner’s recommendations (Unboxing Artificial Intelligence: 10 steps to protect human rights). 

  • The right to freedom of assembly needs to be explicitly addressed by the recommendations. Digital tracking in particular has a clear impact on, and chilling effect on, individuals' freedom of assembly.
  • Conducting human rights impact assessments for algorithmic systems deployed by public authorities should be mandatory for states. States should ensure a meaningful external review of AI systems by an independent oversight body with relevant expertise in order to mitigate possible human rights harms caused by these systems.
  • States should support independent research on the potential of algorithmic systems for creating positive human rights effects and for advancing public benefit. The interests of marginalised and vulnerable individuals and groups have to be adequately taken into account and represented. The assessment of public societal benefit must go beyond the interests of the majority or of better-represented individuals or groups.
  • Private-sector actors that develop algorithmic systems should comply with industry standard human rights due diligence frameworks to avoid fostering or entrenching discrimination. Having said that, it is of utmost importance to distinguish between human rights impact assessments as state obligations and human rights due diligence processes on the private sector’s side.  

In addition to our suggestions to further strengthen the draft recommendations, we highlighted the key elements of the draft that Access Now is particularly supportive of:

  • The identification of the specific risks associated with public-sector use of algorithmic systems in the provision of public services;
  • The emphasis on transparent and open public procurement; and
  • The strong stance on transparency requirements, pushing back on unjustified limitations thereto.

Our comments and suggestions are based on Access Now’s research on AI, in particular our two reports on AI and human rights, Mapping Regulatory Proposals for Artificial Intelligence in Europe and Human Rights in the Age of Artificial Intelligence. The former report covers regional strategies from the European Union and the Council of Europe as well as national plans from several member states. The latter report provides a comprehensive analysis of the potential pitfalls of AI, and how to address AI-related human rights harms.

Beyond this research, Access Now has been directly involved in the European Union's work on AI: our Europe Policy Manager, Fanny Hidvegi, was selected to join the European Union's High-Level Expert Group on Artificial Intelligence (AI HLEG). We published our preliminary recommendations to improve the Ethics Guidelines on Trustworthy AI, as well as our assessment of the positives and negatives of the Policy and Investment Recommendations for Trustworthy AI.

We also incorporate the principles laid out in the Toronto Declaration on Equality and Non-Discrimination in Machine Learning, spearheaded by Access Now and Amnesty International at RightsCon 2018. The Toronto Declaration sets out tangible means of upholding equality and non-discrimination and clarifies the obligations of states and the responsibilities of companies to protect human rights in the development and use of machine learning; it further emphasises the importance of accountability for human rights harms and of access to effective remedy for those who suffer such harms in the context of machine learning systems.

The Wikimedia Foundation is a signatory of the Toronto Declaration and supports policy that protects internet users from discrimination that negatively affects their human rights. Collaborative online projects can be negatively impacted by the mandatory use of algorithmic systems to monitor or filter the information uploaded by the users of a platform. Algorithmic systems should aid and support humans in their ability to participate in culture and the digital economy. The Council of Europe's draft Recommendation on the Human Rights Impacts of Algorithmic Systems gives welcome guidance for the regulation, development, and use of such systems.

We look forward to working with the Council of Europe to ensure that the design, development, and deployment of AI-assisted technologies are individual-centric and respect human rights. Lawmakers at both the national and regional levels are keen to adopt "something on AI". By delivering these recommendations on the human rights impacts of algorithms, the committee of experts has reinforced the Council of Europe's key position in ensuring that the recommendations are translated into concrete, actionable policies.