The EU must hold VLOPs accountable

How the Digital Services Act could hack Big Tech’s human rights problem

These are extraordinary times for digital rights worldwide — and we don’t mean that in a good way. A handful of tech companies are serving as global gatekeepers for our enjoyment of fundamental rights, such as the rights to privacy and free expression. We are seeing disinformation, hate speech, and weaponisation of our personal data hurt targeted communities and warp free democratic discourse. To make matters worse, in many cases lawmakers are pushing short-sighted legislative “solutions” — like deputising private companies to police speech, or demanding increasingly swift removal of content — that would further damage human rights.

The European Union has the opportunity to address these problems using an approach that will establish clear responsibilities for private actors and hold them to account, while ensuring our rights are protected. Today, as the European Commission continues to deliberate on the Digital Services Act (DSA) legislative package, we publish a series of position papers with policy recommendations designed to put the rights of users at the centre — and better protect those rights going forward. While the series is aimed at the EU in the context of the DSA package, some of the recommendations can translate to other national contexts. They are informed by our global position on content governance published earlier this year.

Systemic regulation is justified and necessary. There is an enormous power imbalance between the large platforms and the people who use them. In recent years, a continuous stream of scandals and growing media coverage have made it clear to the public that companies are exercising this power and making enormous profits without taking sufficient responsibility to safeguard people’s rights.

Furthermore, these platforms have failed to provide meaningful transparency, and business models built on data harvesting without proper safeguards do not give people adequate control, either over their data or over the information they receive and impart. We are not in a position to understand how our data are being used, nor can we determine the extent to which automated decision-making is leveraged in the curation or amplification of content. We cannot gauge the impact of these automated processes on our exposure to diverse content, and we cannot study or prevent the discriminatory treatment of underrepresented groups.

The DSA legislative package, currently being prepared by the European Commission, is a unique chance to create systemic regulation of gatekeeper platforms based on human rights standards, with the rights of users as the utmost priority. The framework — which has been referred to as the “second GDPR” — can establish European rules for addressing problems such as online hate speech and disinformation, increasing transparency in online political advertising, and ensuring fairness in e-commerce. This could not only transform the regulation of gatekeepers in Europe but also set global standards for content governance, just as its older sibling, the GDPR, has done for safeguarding our personal data.

This opportunity does not come without risk. It is imperative that this legislation serves to protect human rights and empower users, not undermine our rights and inadvertently make companies more powerful. Our set of papers addresses the issues we have identified as priorities in our policy and advocacy work to ensure that the DSA does just that.

In brief, we warn lawmakers against deputising companies like Facebook to censor undesirable but not illegal content; spell out ways they can ensure any content restrictions comply with legally binding fundamental rights standards; propose gradually increasing responsibilities for the companies that serve as gatekeepers; propose building in transparency requirements and avenues of redress for people using the platforms; and, applying lessons from the GDPR, propose creating an effective oversight body with sufficient funding and teeth to ensure platform compliance.

The series offers:

I. Human rights-based legal framework for intermediary liability – This paper defines adequate response mechanisms for addressing illegal content and proposes built-in due process safeguards that strengthen protection of the fundamental rights of users, forming a framework with which online platforms must comply.
II. Human rights response to the amplification of potentially harmful legal content – This paper proposes a tiered approach to meaningful transparency and defines specific requirements for companies that use so-called open content recommendation systems, requirements consistent with the potentially broader obligations for algorithmic decision-making systems.
III. Proposal for the effective oversight and enforcement model within the DSA legislative framework – This paper proposes a functioning cooperation mechanism between a network of independent national regulators, a newly established DSA coordination body at the EU level, and a new European regulator responsible for monitoring the compliance of online platforms with procedural human rights safeguards.

Download the full series