EU content regulation: what’s the (online) harm?

In recent years, national legislators in EU Member States have been pushing for new laws to combat negative societal phenomena such as hateful or terrorist content online. These regulatory efforts have one common denominator: they shift the focus from conditional intermediary liability to holding intermediaries directly responsible for the dissemination of illegal content on their platforms.

Two prominent legislative and policy proposals of this kind that will significantly shape the European debate around the future of intermediary liability are the UK White Paper on Online Harms and the newly adopted Avia law in France.

UK experiment to fight online harm: over-blocking on the horizon

In April 2019, the United Kingdom (UK) government proposed a new regulatory model built around a so-called statutory duty of care, with the stated aim of making platform companies more responsible for the safety of online users. The White Paper foresees a future regulation that holds companies accountable for a set of vaguely predefined “online harms”, which include illegal content but also user behaviour that is deemed harmful without necessarily being illegal.

EDRi and Access Now have long emphasised the risks that privatised law enforcement and heavy reliance on automated content filters pose to human rights online. In this vein, multiple civil society organisations, including EDRi members, have warned against the alarming measures the British approach takes. The envisaged duty of care, combined with heavy fines, creates incentives for platform companies to block online content even when its illegality is doubtful, simply to avoid liability. The regulatory approach proposed by the UK Online Harms White Paper would effectively coerce companies into adopting content filtering measures that ultimately result in the general monitoring of all information shared on online platforms. Driven by over-compliance with states’ demands, such conduct often amounts to illegitimate restrictions on freedom of expression or, in other words, online censorship. Moreover, a general monitoring obligation is currently prohibited by European law (Article 15 of the E-Commerce Directive).

The White Paper does not only address illegal content but rather has a very broad scope that covers phenomena such as online disinformation and terrorist content. This is highly problematic with regard to the human rights law criteria that govern restrictions on freedom of expression. The ill-defined and vague concept of “online harms” cannot serve as a proper legal basis to justify an interference with fundamental rights. Ultimately, the proposal falls short of providing substantial evidence to support its approach. It also plainly fails to address key issues of online regulation, such as content distribution on platforms, which lies at the core of companies’ business models, the opacity of algorithms, violations of online privacy, and data breaches.

French Avia law: another “quick fix” to online hate speech?

Inspired by the German Network Enforcement Act (NetzDG), France is in the process of adopting its own piece of legislation, the so-called Avia law – named after the Rapporteur of the file, MP Laetitia Avia. Similarly to the NetzDG, the law requires companies to remove manifestly illegal content within 24 hours of receiving a notification about it.

Following its German predecessor, the Avia law encourages companies to be overly cautious and preemptively remove or block content to avoid substantial fines for non-compliance. The time frame in which they are expected to take action is too short to allow for a proper assessment of each individual case. Importantly, the French Parliament does not rule out the possibility that companies resort to automated decision-making tools to process the notices. Such a measure can, in itself, be grounded in the legitimate objective of fighting hatred, racism, LGBTQI+-phobic, and other discriminatory content. However, tackling hate speech and other context-dependent content requires careful and balanced analysis. In practice, leaving it to private actors, without adequate oversight and redress mechanisms, to decide whether a piece of content meets the threshold of “manifest illegality” will be damaging for freedom of expression and the rule of law.

However, there are also positive aspects to the Avia law. It provides safeguards for procedural fairness by requiring individuals who notify platforms of potentially illegal content to state the reasons why they believe it should be removed. Moreover, the law sets out obligations for companies to establish internal complaint and appeal mechanisms for both the notifier and the content provider. Transparency obligations on content moderation policies are also introduced. Lastly, when monitoring compliance with the law, the regulator established by the Avia law does not focus its evaluation solely on the amount of content removed but also scrutinises over-removal.

Do not fall into the same trap!

We are currently witnessing regulatory efforts at the national and European level that seek to provide easy solutions to online phenomena such as terrorist content or hate speech while ignoring the underlying societal issues. Most of the suggested solutions rely on filters and content recognition technologies with limited ability to assess the context in which a given piece of content has been posted. Legislators often sidestep the proper safeguards and requirements for meaningful transparency that should accompany these measures. Similar trends can be observed beyond the EU and its Member States. For instance, Australia recently passed a new law imposing criminal liability on executives of social media platforms. Section 230 of the American Communications Decency Act (CDA) may be placed under review by a rumoured presidential executive order that could significantly limit the liability protections granted to platform companies by the existing law.

Legislators around the globe have one thing in common: the urge to “eradicate” vaguely defined “online harms”. The rhetoric of danger surrounding the notion of online harm has become a driving force behind regulatory responses in liberal democracies. This is exactly the kind of logic frequently used by authoritarian regimes to restrict legitimate debate. With the upcoming Digital Services Act (DSA) potentially replacing the E-Commerce Directive in Europe, the EU has an extraordinary opportunity to become a trend-setter, establishing high standards for the protection of users’ human rights while addressing legitimate concerns stemming from the spread of illegal online content.

For this to happen, the European Commission should propose a law that imposes workable, transparent, and accountable content moderation procedures and a functioning notice-and-action system on platforms. Such positive examples of platform regulation should be combined with forceful action against the centralisation of power over data and information in the hands of a few big tech companies. EDRi and Access Now have developed specific recommendations containing human rights safeguards, which should be incorporated into both the content moderation exercised by companies and state regulation tackling illegal online content. It is the European Commission’s responsibility to ensure that fundamental rights are upheld when drafting any future legislation governing intermediary liability and redefining content governance online.

This post was co-authored with Chloé Berthélémy from European Digital Rights (EDRi). The post is also available on EDRi’s website.