No summer break for free expression in Europe: Facebook cases that matter for human rights

The summer of 2019 was an intense period for the right to freedom of expression and information in Europe. Besides regulatory pressure on tech companies by governments and global policy initiatives such as the Christchurch Call, European courts issued multiple opinions and judgments. In this blog post, we draw attention to cases that will impact content moderation and freedom of expression online, regardless of whether efforts from the many national and international multistakeholder fora actually help “solve the never-ending problems on the internet”.

In June, the Advocate General issued a long-anticipated opinion in the case of Glawischnig-Piesczek v. Facebook, currently pending before the Court of Justice of the European Union (CJEU). Then, in July, and for the first time in European history, Germany fined Facebook for failing to meet the transparency requirements set forth in the Network Enforcement Act, commonly known as NetzDG or the German hate speech law. This is not the only bad news for the social network giant. Facebook is also facing legal action related to the take-down of pages created by an anti-drug abuse NGO in Poland. These three cases are significant for the future of intermediary liability in Europe and for freedom of expression online.

Eva Glawischnig-Piesczek v. Facebook 

What is the case about?

The case started in the spring of 2016, when a Facebook user posted an article featuring a photo of Eva Glawischnig-Piesczek, then a member of the Austrian Green Party. The post was accompanied by comments calling her “a corrupt oaf”, “lousy traitor”, and “member of a fascist party”. Glawischnig-Piesczek quickly demanded that Facebook remove the post, claiming that the comments were defamatory and unlawful under Austrian national law.

Facebook eventually removed the post, but only after the Commercial Court of Vienna issued an interim injunction ordering the platform to disable access to the post in Austria. The Court agreed with Glawischnig-Piesczek that the comments were “obviously unlawful” and ordered Facebook to actively monitor and block not only identical, but also equivalent, comments shared on the platform. However, this solution did not satisfy the plaintiff, and she appealed the decision.

The Higher Regional Court of Vienna confirmed that Facebook should remove any future posts including the identical defamatory comments alongside the picture of Ms. Glawischnig-Piesczek. However, it disagreed with the second part of the original interim injunction, which forced Facebook to remove equivalent content. The Court underlined that actively monitoring equivalent content, that is, comments that convey the same message in different words, would amount to a general monitoring obligation, which is forbidden by the E-Commerce Directive, the main legal instrument regulating intermediary liability for user-generated content in the EU and its member states.

As part of the final appeal to the Austrian Supreme Court, the plaintiff added new demands. Glawischnig-Piesczek’s legal team argued that Facebook should remove the post worldwide, even in countries where the content is deemed legal. The Austrian Supreme Court referred the case to the CJEU, making it the first content moderation case in the CJEU’s history.

Specifically, the Court will have to determine whether a national court order requiring a host provider to remove comments identically worded to the original illegal content amounts to the type of general monitoring obligation prohibited by European law.

If the answer is no, the next question is whether such an order may also cover similar content and, if so, whether it applies to content posted in countries outside the national court’s jurisdiction.

These questions will have great consequences for free speech in Europe and possibly beyond. The recent Advocate General opinion, while only advisory, gives us insight into what the final outcome might be, and it doesn’t look pretty.

The opinion states that in order to remove identically worded content that has previously been found defamatory, national courts may order Facebook to monitor every single post shared by every user. Regarding equivalent content, the courts can also require Facebook to take it down, but only from the account of the original content provider. In other words, the AG’s opinion suggests that a social media platform can be required to deploy filters that censor defamatory speech.

Are the implications positive or negative?

Not good. Like hate speech or terrorist content, defamatory expression is strongly context-dependent. Access Now has repeatedly underlined that automated filters cannot understand context and often flag legitimate expression for takedown, or prevent it from being posted at all. The high error rate of automated tools in moderating online content has also been demonstrated by recent research findings.

Furthermore, Ms. Glawischnig-Piesczek is a well-known political figure in Austria. It is puzzling that the national court considered online slurs against her clearly unlawful and refused to view them as political speech, that is, distasteful but still legitimate criticism of a political leader, a form of expression that traditionally receives more robust protection in Western liberal democracies. Allowing filters that monitor users’ political speech will likely lead to abuse of these measures, especially in countries where ruling elites are actively weakening democracy and the rule of law.

The opinion does not clarify exactly what counts as “identical” and “equivalent” content, either. Each time a post is shared and re-shared, its context and motivation may be drastically different. The list of possible legitimate re-uploads is long, from journalism to satire, humor, and academic use. Even if the initial post was deemed illegal, that assessment can change with each re-upload. Automated measures are currently unable to assess this contextual background and will inevitably fail to grasp such nuance, no matter how much they improve.
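To make the concern concrete, here is a deliberately simplified sketch of what an “identical content” filter of the kind the opinion envisages could look like. It is purely illustrative: the function names and matching logic are our assumptions, not Facebook’s actual system, and any real deployment would be far more complex.

```python
import hashlib

# A deliberately naive "identical content" filter: it remembers fingerprints of
# posts previously ruled defamatory and blocks any new post with the same
# fingerprint, regardless of who posts it or why.
blocked_fingerprints = set()

def fingerprint(text: str) -> str:
    """Normalise whitespace and case, then hash, so identical re-uploads match."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def mark_as_unlawful(text: str) -> None:
    """Called once a court has found this exact wording unlawful."""
    blocked_fingerprints.add(fingerprint(text))

def should_block(post_text: str) -> bool:
    """Block any post whose wording matches a previously banned post.

    Note what the filter cannot see: whether the re-upload is news reporting,
    satire, academic commentary, or a quote used to criticise the original post.
    """
    return fingerprint(post_text) in blocked_fingerprints

# A court rules that a specific comment is defamatory...
mark_as_unlawful("She is a corrupt oaf")

# ...and from then on the filter blocks every identical re-upload, including a
# journalist or researcher quoting the exact phrase to report on the case.
print(should_block("she is a corrupt OAF"))       # True, context is ignored
# Meanwhile, a trivially reworded ("equivalent") version slips through, which is
# exactly why courts are tempted to demand broader, even less precise matching.
print(should_block("What a corrupt oaf she is"))  # False
```

Anything more ambitious than this exact-match logic requires the filter to judge meaning and intent, which is precisely where automated tools fail.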

SIN v. Facebook 

What is the case about?

There are numerous reported cases of automated measures illegitimately restricting users’ right to freedom of expression and information. In some cases, online platforms such as YouTube have removed important evidence of human rights abuses in war zones. 

The latest case in Poland is a good example of the far-reaching consequences that algorithmic mistakes can have for free public discourse online. Between 2018 and 2019, Facebook deleted several fan pages and groups of the Civil Society Drug Policy Initiative (SIN), a Polish civil society organisation that runs educational campaigns on the harmful consequences of drug use and provides assistance to drug users, claiming that the NGO had violated Facebook’s Community Standards. Together with the digital rights NGO Panoptykon, SIN brought the case to court, arguing that Facebook unjustifiably restricted the organisation’s right to freedom of expression and information. SIN and Panoptykon demand that the pages, along with all their followers, be reinstated and that Facebook publicly apologise.

At the beginning of July, the District Court of Warsaw issued an interim injunction that temporarily prohibits Facebook from removing any other SIN fan pages or groups from the platform. Under the Court’s order, Facebook is obliged to store the deleted groups, profiles, and fan pages, together with all published content, followers, likes, and comments, while the case is pending. Should SIN win the case, this measure will allow for complete restoration of the unjustifiably restricted content. The decision is not yet final.

Are the implications positive or negative?

We will see. Because there is no publicly available Facebook policy on the specifics and technicalities of its content moderation practices, we can only speculate about what really happened. The most likely scenario is that Facebook’s algorithms assessed the SIN pages as promoting drug abuse rather than preventing it, which neatly illustrates the contextual blindness inherent in content moderation technologies.

In recent years, internet intermediaries have been under significant pressure to combat unlawful content shared on their platforms. Political pressure, combined with shortsighted regulatory efforts, has pushed private actors to become more proactive in moderating user-generated content, which has led to reported over-removals and censorship.

The case of SIN v. Facebook confirms the strong need for transparent internal mechanisms and easily accessible appeal procedures, available to both online speakers and content providers. As Panoptykon rightly pointed out, the case may become instrumental in making Facebook finally take due process safeguards seriously.

German authority fines Facebook for under-reporting hate speech cases 

What is the case about?

Civil society organisations across Europe are not the only ones whose patience with Facebook has run short. Right at the beginning of July, Germany’s Federal Office of Justice (BfJ) fined the company two million euros for violating the transparency requirements of the Network Enforcement Act (NetzDG). NetzDG requires platforms that fall under its scope to submit transparency reports in German every six months. The law sets out detailed minimum reporting standards, as well as procedures for how companies should handle users’ complaints.

According to BfJ, Facebook has been under-reporting the number of complaints it receives about illegal content. Because the company reports only certain categories of complaints, the German regulator cannot draw a realistic picture of the extent of violations. The Minister of Justice, to whom BfJ directly reports, has underlined how difficult it is for Facebook users to submit complaints about content violating NetzDG, especially in contrast to complaints about Community Standards violations. The NetzDG complaint form, along with an information page about the law, is relatively hidden on Facebook’s site, and it takes considerable effort for a user to find it.

Are the implications positive or negative? 

It may be positive for transparency. When deciding the fate of content reported under NetzDG, Facebook created a two-tiered procedure. First, all reported content is always reviewed under the Community Standards. If a violation is found, the content is removed globally; it is never judged against the NetzDG provisions, nor is it included in the transparency report submitted to BfJ. Second, if content violates NetzDG but conforms to Facebook’s own rules, it is blocked only in Germany. This has led to NetzDG practically becoming “the Community Standards Enforcement Act”. Since Facebook does not publicly share data on the number of complaints about content violating the Community Standards in Germany, it is nearly impossible to assess how many complaints were submitted through both the NetzDG and the Community Standards forms.
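The effect of this two-tiered procedure on the numbers BfJ receives can be illustrated with a short, purely schematic sketch. The structure below mirrors the procedure as described above; the function and field names are our own, and nothing here reflects Facebook’s actual code.

```python
from dataclasses import dataclass, field

@dataclass
class NetzDGReport:
    """What ultimately ends up in the transparency report submitted to BfJ."""
    reported_actions: list = field(default_factory=list)

def review_reported_content(post: str, violates_community_standards: bool,
                            violates_netzdg: bool, report: NetzDGReport) -> str:
    """Schematic model of the two-tiered review described above.

    Tier 1: every complaint is first checked against the Community Standards;
    a hit is removed globally and never counted under NetzDG.
    Tier 2: only content that passes tier 1 but violates NetzDG is blocked
    locally in Germany and counted in the report to BfJ.
    """
    if violates_community_standards:
        return "removed globally (invisible in the NetzDG report)"
    if violates_netzdg:
        report.reported_actions.append(post)
        return "blocked in Germany (counted in the NetzDG report)"
    return "kept online"

# Most content that is illegal under NetzDG also breaches the Community
# Standards, so it is handled in tier 1 and never reaches the report.
report = NetzDGReport()
print(review_reported_content("hateful post A", True, True, report))
print(review_reported_content("hateful post B", False, True, report))
print(len(report.reported_actions))  # 1, even though two NetzDG complaints led to action
```

Whatever the real internals look like, the procedure described in the BfJ decision means the reported figures capture only the second branch, which is exactly the under-reporting the regulator objected to.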

There is a significant difference between formal and meaningful transparency. While NetzDG was rightly criticised by civil society, industry, and academia for jeopardising the core principle of free expression, its effort to strengthen transparency is actually a positive aspect of the law. However, this case demonstrates that the transparency requirements related to online content are not robust enough. If legislators do not secure transparency, especially around the reporting mechanisms available to users, effective appeal mechanisms can hardly be put in place. This summer, Germany sent out a clear message: no community standards or terms of service can stand above the law. Users have a right to know what happens to their content or complaints, and to challenge platforms’ decisions effectively.

What does this mean for the future of freedom of expression in Europe?

In recent years, we have witnessed regulatory efforts at the national and European levels that appear to seek magical solutions to societal problems online, such as terrorist content or hate speech. Most rely on filters and content recognition technologies, which have limited ability to assess context. Unfortunately, legislators often sidestep the proper safeguards and meaningful transparency requirements that should accompany these measures. These three cases therefore reflect the currently shaky situation for freedom of expression and information in Europe. With the upcoming reform of the intermediary liability model for user-generated content, the EU has a unique opportunity to craft a model that protects human rights and has a positive impact at the global level, following in the footsteps of the work done on the General Data Protection Regulation (GDPR). Through our advocacy work for human rights-centered content governance, Access Now will try to ensure that this is not a missed opportunity.