Why Facebook’s proposed hate speech policy on Zionism would only add fuel to the fire

Facebook is proposing yet another problematic hate speech policy that would undermine its users’ freedom of expression. Pressured to combat surging hate speech and anti-Semitism on its platform, the company is looking into how it should moderate the use of the word “Zionist,” and whether to add the term as a protected category under its hate speech policy. We don’t think that is a good idea, particularly given Facebook’s inability to strictly adhere to human rights principles in its content moderation practices.

Facebook’s problematic proposal

Essentially, under the new hate speech policy Facebook would remove “attacks” against Zionists when the term is used as a proxy for Jewish or Israeli. Facebook defines attacks as “violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.”

The policy has stirred significant criticism and fury among progressive Jewish, Palestinian, and Muslim groups. For one, it imposes a single and inaccurate worldview in which Zionists are synonymous with Jews, a view many activists have publicly opposed. But most importantly — and this is where we are particularly concerned — the policy would lead to further censorship of legitimate political speech.

Freedom of expression under threat

For years, activists and human rights groups in Palestine and across the Middle East and North Africa have contested giant platforms’ disproportionate targeting of content generated by communities that are already marginalized and oppressed. Online space is especially important for people in these communities to freely express their views and bypass state-sanctioned censorship and repression. In this thorny context, and as rights advocates point out in the international campaign, “Facebook, we need to talk,” the policy “will prohibit Palestinians from sharing their daily experiences and histories with the world.” It would also “prevent Jewish users from discussing their relationships to Zionist political ideology.”

Disproportionate targeting of Palestinian content, under a specialized set of platform policies, has been on the rise since 2016. Platforms have repeatedly shut down or suspended the accounts of journalists, media organizations, grassroots activists, and movements — not to mention the accounts of ordinary users — under the pretext of removing hate speech or extremist content. They have also targeted expression that is critical of Israeli policies and human rights violations, which has prompted Palestinian users to launch campaigns protesting censorship of their voices on Facebook, Twitter, and YouTube. Palestinians have also reported “algorithmic tweaks” around key political events or developments that crippled their reach. For example, administrators of Palestinian and Arabic-speaking Facebook pages reported a 50-80% drop in their reach during the so-called “peace deals” between Israel, Bahrain, and the United Arab Emirates.

The dangers of this policy should be seen in light of ongoing pressure by governments or pro-government groups to stifle free speech. From Israel’s Cyber Unit sending platforms tens of thousands of extralegal “voluntary” requests to remove content, to the arrest of hundreds of Palestinians over their online posts, to the public pressure on companies to adopt the controversial definition of anti-Semitism by the International Holocaust Remembrance Alliance (IHRA), there is already a push for censorship that Facebook’s hate speech policy would only exacerbate.

Problem of defining hate speech

There is no universally accepted definition of hate speech at the international level. The ambiguity about what expression constitutes hate speech has political as well as human rights implications. On one hand, the lack of a harmonized understanding of what constitutes hate speech creates space for abuse when platforms arbitrarily restrict lawful expression, including political speech. On the other, the vagueness gives cover for inaction by governments and private actors in addressing hate speech with potential societal harm, including the silencing of, and discrimination against, vulnerable and underrepresented groups.

Determining what expression amounts to hate speech strongly depends on the social, political, and historical context. Some forms of hateful expression are potentially harmful but not necessarily illegal. When you combine this inherent vagueness with the difficulty platforms have in performing detailed contextual analysis at scale, and add the targeting of fluid terms such as “Zionism,” it is a recipe for censorship and mistakes that will violate the human rights of Facebook users.

Problem of operationalization

To implement this policy and respect human rights, Facebook must understand context and nuance. Considering Facebook’s record in tackling complex issues like this, we are skeptical it can roll out the policy without discrimination. Zionism is an evolving and widely debated political ideology, and without an adequate assessment of its context and history, it is impossible to box this dynamic concept into a definition a few lines long. Facebook relies on a commercial content moderation approach based on assessing and deleting individual pieces of content. To tackle the enormous volume of shared content, it has to deploy automated decision-making tools that cannot accurately assess the context of expression. Even in straightforward cases, Facebook often delivers erroneous decisions. Its ability to automate the very sensitive task of judging whether something constitutes hate speech — let alone attacks invoking “Zionism” — will always be profoundly limited.

Policies such as this once again highlight the underlying issue with online content moderation: powerful companies are calling the shots on what political speech is permissible on the internet and drawing the contours of those discussions.

Due to its dominance and power to control the public sphere, Facebook is able to set the standard for what is permitted online globally, as well as the technologies used to enforce that standard, to the detriment of transparency and accountability. Thanks to the diligent work of our local partners, we have witnessed how unevenly, unfairly, and sometimes negligently Facebook implements its Terms of Service, especially in parts of the world where the platform is not subject to public scrutiny or regulatory pressure.

It is crucial that the fight against anti-Semitism and online hate speech be conducted within the framework of international human rights law rather than through politically motivated ad hoc policies. As civil society has poignantly argued in the ongoing campaign against the proposed policy: “Facebook scrutinizing specific words won’t keep any of us safe, but it will prevent us from connecting on the political issues important to all of us and block us from holding people and governments accountable for their policies and actions.”

Read more about the proposed hate speech policy by Facebook here.