Saving our agnostic internet, part I: censorship and free expression

The internet that we use today has no morals. That’s not to say that it’s immoral, only that it does not and cannot make value judgments. What does this mean for governments and tech companies responding to threats online?

The role of these actors in addressing threats has been in the news a lot this year, particularly in discussions about harmful speech: the use of social media platforms to disseminate misinformation, so-called “fake news,” and terrorist propaganda. Meanwhile, people in oppressed and marginalized groups have continued to face harassment on social media platforms, and violent hate groups have used these platforms to organize mass gatherings.

In response, governments around the world are pushing companies like Facebook, Twitter, and Google to “do more.” Unfortunately, the proposals to address these issues have ranged from the dreadful to the truly atrocious, typically focusing on ways to block or filter out the “bad” stuff. This has significant implications for free expression and the future of the open internet. In a three-part series, we’ll take a closer look at what governments and companies can or should do to address harmful speech online. In this post, we look at issues surrounding censorship and content removal. In part two, we will examine questions of anonymity, and in part three, we will explore the role of algorithms in policing speech.

Dealing with “bad” speech: what not to do

The companies behind internet communications tools and social media platforms have provided people with untold opportunities. Around the world, activists use these platforms to share information, to connect, and to organize. They help us communicate and get closer to loved ones. They also provide a portal through which we can take collective action. However, the internet doesn’t distinguish “good” from “bad” uses of these tools or platforms, and what we do can be either — or both. When governments see an increase in what they consider “bad” activity, high-ranking officials around the world often respond by asking the companies that enable the activity to limit access to or dissemination of specific content.

At the most extreme, these government proposals seek to force companies to monitor, interpret, police, and (sometimes) block user content. These are all different acts of censorship, and they can interfere with the human rights to privacy and freedom of expression. Some content can be limited in line with human rights law; after all, the right to freedom of expression is internationally considered a “qualified” right. But free expression ought to be limited only when the limitation is absolutely necessary to serve a legitimate government interest.

Repressive governments abuse the qualification to this right. For this reason, it is vital that any limitations on the right to freedom of expression are narrowly construed and codified in law. Otherwise some governments will restrict the important, legitimate speech activities that contribute to an advanced society. They will limit research, discourage dissent, or silence unpopular ideas. Categories of speech that states seek to limit include child pornography, obscenity, and false statements of material fact. In the United States, “incitement,” or speech that advocates an intended, likely, and imminent act of violence, is not protected speech. In addition, Germany and France limit what they deem “hate speech” as well as speech related to Nazism. Some laws to limit speech are over-broad and can be abused in ways that harm free expression. A hate speech law in Spain was recently used in the prosecution of eight Catalan teachers who criticized police violence during the referendum on Catalan independence.

Many current government proposals seek privatized enforcement of speech laws, delegating the role of censor to private companies without adequate judicial oversight or public accountability. From our perspective, it appears that lawmakers use bold statements about company responsibility to appear strong and make people feel better. However, these statements in fact abdicate responsibility for the problem, pushing companies to implement a solution when the government itself does not yet have an answer. This is dangerous, since private companies are not held to the same human rights standards as governments, and without proper human rights protections, any authority delegated to them will almost certainly be exercised over-broadly. That could lead to removal of lawful content on a mass scale, including vital commentary and analysis of current events. That’s because business is by nature profit-driven: while companies may occasionally take strong positions to protect speech, they are more likely to reduce their legal risk by erring on the side of removal.

This is not speculation. Civil society has long shown that companies will often remove legitimate content because they face claims under intellectual property law. Intensifying government pressure on companies to deal with “extremist content” has also led to this kind of removal, as when YouTube removed videos documenting the war in Syria.

All of this leads to the question of how companies define “bad” content. If we leave this to the discretion of the companies, we get ever-shifting standards for removal that could change without transparency. A company could use these invisible standards to support its own private political agenda, or to bend to the whim of a new government regime. In either case, human rights would suffer, and the censorship is likely to have the biggest impact on already marginalized populations.

Our recommendations: rights-based, user-centric solutions with transparency

So if privatized censorship isn’t the solution to “bad” speech, what is? Since human rights apply online just as they do offline, we can start by taking tested rules for governing the freedom of expression and applying them to the internet, using concrete standards and transparent processes. This issue is really two issues, so let’s break them down:

  • When should companies be required to take down content?

Governments should not pressure companies to take down content, disable user accounts, or block access to web pages or services beyond what is required by law, and that law must itself be consistent with international human rights standards. Any takedown or other form of censorship must be both necessary to serve a legitimate government aim and proportionate to achieving that aim. Takedowns should occur only after a competent judicial authority, such as a judge, has determined that the speech at issue is not lawful, and they should be fully transparent and reflected in corporate transparency reports published at least once a year.

  • When should companies initiate removal of content?

Companies have an obligation to respect human rights. Most companies have terms of service and community standards which are public and which they can enforce to remove (or de-prioritize) content or suspend accounts. We highly recommend that these policies be designed with human rights in mind and include clear avenues for remedy. In addition, any removals should be enforced equitably and transparently, with clear notice to users of why their content or profile has been removed, how to appeal the decision, and, for account suspension, what the process is for remediation. Government officials should not use these voluntary removals as an end-run around official legal process, and companies should take steps to prevent them from doing so.

Anything beyond this risks giving companies far too much authority.

Beyond the removal of content or user accounts, there is much we can do to address problems with “bad” content. However, there is no single solution to stop the spread of misinformation, dismantle troll armies, end online harassment, or prevent terrorists from recruiting online. These are significant societal issues that require solutions that will bolster, not undermine, human rights.

Conversations on how to proceed are well underway. There are efforts to give users resources for improving media literacy, helping them to identify scams or fabricated content for themselves. Technologists are working to make voting systems and other infrastructure for campaigns and elections secure, and they should be supported in these endeavors without having to worry about governments pushing for back doors or exceptional access to our technology. Projects like the Redirect Method may have their own issues, but it is worth exploring the development of tools that companies can apply to their platforms transparently.

Finally, but perhaps most importantly given the effect on democratic discourse and the integrity of our elections, it is well past time for countries like the United States to consider the impact of behavioral profiling and targeting. Without comprehensive data protection laws in the U.S. and elsewhere to limit corporate data collection and retention, people have no real ability to control data that can then be stolen, exploited, and used against them.

The internet is not coded with morals, designed to transmit only what a handful of governments or companies consider “good” content. That is its strength, as governments cannot shut off the internet only for those who disagree with them (although some are quite evidently attempting to do just that). Yet it’s also challenging, since people can use the internet to do harm to others. Attempts to assert more control can have a tremendous negative impact on human rights, including potentially silencing minority and dissenting viewpoints. If we want to protect the internet as a vehicle for enjoyment of human rights, our approach should be to turn to time-tested human rights standards, increase transparency, and develop solutions that put people and their rights at the center.