
Access Now responds to Special Rapporteur Kaye on “Content Regulation in the Digital Age”

Governments globally are moving aggressively to coerce private internet intermediaries to restrict online content under the banner of fighting “fake news,” countering terrorism, or combating illegal content and hate speech. Efforts like those in France and Brazil to “ban” false information on social media platforms create a minefield for human rights. David Kaye, U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, is conducting a study to explore the issues at stake, and Access Now has submitted evidence for the report.

Access Now is deeply concerned about trends around the world that force companies to rely heavily on automated systems to police and manage content, even though understanding context is critical for determining whether content is legal. According to a survey by Freedom House, 65 percent of governments ask “companies, site administrators, and users to restrict online content of a political, social, or religious nature.” Our own data support these findings: since January 2016, Access Now’s Digital Security Helpline has managed 204 cases, half of them from Syria, that involved the flagging, removal, or blocking of online content. This is particularly alarming because activists often rely on internet communications and platforms to document and expose human rights violations, and what they publish can be subject to over-broad censorship. In 2011, activists used YouTube to show that a peaceful protest against Syrian President Bashar Assad’s rule had turned bloody. Over the past year, activists have found that videos providing critical evidence of atrocities have been removed.

The content standards at the largest internet platforms often fail to protect vulnerable communities from online harassment. Since 2013, our Helpline has handled 166 cases for groups or individuals that defend women’s rights. Around half of these clients requested security consultations, seeking advice on assessing their practices and protecting the privacy of their communications, while the other half asked for rapid-response assistance. For the past two years, the most common requests to our Helpline from these groups have been for help with compromised or potentially compromised accounts; harassment incidents; and secure communications and protection of websites.

Our submission for Kaye’s report explores these trends in detail, giving examples that show how current content regulation practices threaten the right to freedom of expression. We also provide a set of recommendations to help companies meet their responsibility to respect fundamental rights, including the implementation of procedural safeguards to ensure that companies’ content management programs do not lead to disproportionate and unnecessary restrictions on free expression.

Key recommendations include:

  • Internet intermediaries should incorporate human rights law and norms as an integral part of their content management programs
  • Companies should not be responsible for evaluating the legality of content in the absence of rule-of-law mechanisms
  • Companies should interpret governments’ jurisdictional authority narrowly to minimize the adverse impact of takedown orders on the right to freedom of expression
  • Removal of disputed content should take place only when the content has been specifically adjudicated as being illegal and a court order has been issued
  • Platforms must ensure that governments do not use flagging to circumvent legal protections for freedom of expression
  • Companies must ensure that policies for restricting or taking down content are enforced in an equitable and transparent manner
  • Companies should ensure that their takedown and appeal policies are clear and public
  • Companies should protect users’ rights to access information and engage in free expression by notifying them of restrictions or takedowns as expeditiously and transparently as possible
  • Online platforms should commit to allowing the use of pseudonyms in appropriate circumstances
  • Platforms should require that complaints filed under real-name policies are backed by evidence, so that such complaints are not abused to victimize people

You can read our full submission for the report here.