Meta is revisiting its hate speech policy on Zionism — here’s why it’s (still) a bad idea

As Israel’s war on Gaza approaches the six-month mark, Meta is (re)considering how to moderate content involving the term “Zionist.” But as we explore in this post, shielding any political ideology from criticism is a slippery slope that can restrict online spaces, shut down political dissent and discourse, and, in the current context, fuel censorship in times of war. Meta must distinguish between speech that targets ideas or ideologies, which is protected even when offensive, and speech that incites hostility, hatred, and discrimination against human beings.

What is Meta’s current policy on use of the term “Zionist”? 

Under its hate speech policy, Meta removes “direct attacks against people” on the basis of protected characteristics such as race, ethnicity, nationality, and religion. Political ideologies are not considered a protected characteristic. However, Meta’s current policy removes the word “Zionist” when “used explicitly as a proxy for Jews or Israelis in a dehumanizing or violent way.” As early as 2021, Meta planned to add the term “Zionist” as a protected category under its hate speech policy — something Access Now and civil society partners warned against at the time.

How is Meta proposing to update its policy? 

Meta is now considering placing even stricter limits on how the term “Zionist” can be used across its platforms. In late January, the company began consulting with civil society, including Access Now, about what content it should remove under its revised hate speech policy. Meta shared examples of statements it might censor, such as “Zionists are war criminals, just look at what’s happening in Gaza” or “I don’t like Zionists.”

Why would this policy change threaten freedom of expression? 

The premise for this proposed policy change is flawed from the outset. Zionism is not an inherent characteristic of any particular individual or group, in the same way that gender, religion, or race are. And removing content using the word “Zionist” contravenes Meta’s own Hate Speech Community Standards, which define hate speech as attacks against “people – rather than concepts or institutions” on the basis of their protected characteristics.

Criticism of the political ideology adhered to by Zionists, who may or may not be Jewish, is distinct from hate speech directed at Jews for being Jewish. The former is a form of political speech protected by the right to freedom of expression and opinion; the latter is antisemitism. Making such an important distinction and moderating this kind of content requires careful, context-specific, and human-led deliberation that puts human rights first — something that, as we have previously pointed out, cannot be achieved via a blanket “one size fits all” policy, and certainly not via the automated content moderation increasingly used by Meta.

The proposal also contravenes international human rights standards. Article 19 of the International Covenant on Civil and Political Rights (ICCPR) protects expression of “all kinds,” including written and non-verbal “political discourse,” and the UN Human Rights Committee has made it clear that this extends to expressions that may be considered “deeply offensive.” Censoring a post that says Zionists are war criminals, as in the example given by Meta, is therefore a violation of freedom of expression under the terms of the ICCPR.

Why is this the wrong solution to hate speech?

The increase in antisemitism online is a serious threat. However, Meta’s response must be guided by respect for all human rights. As Jewish, Muslim, and Palestinian civil society voices have said, this policy “won’t make any of us safer. Instead, it will undermine efforts to dismantle real antisemitism and all forms of racism and bigotry.” If implemented, this proposal will likely be instrumentalized to censor legitimate criticism of Israel, which would in turn silence Palestinian human rights activists. 

The timing of Meta’s proposal also raises questions about the company’s priorities and motivations. Since October 7, 2023, inflammatory speech, genocidal rhetoric, and incitement to violence against Palestinians have exploded, both online and offline. UN human rights experts have voiced their alarm at the “discernibly genocidal and dehumanizing rhetoric coming from senior Israeli government officials” — rhetoric brought as evidence in the ongoing International Court of Justice (ICJ) case against Israel for possible genocide in Gaza. The UN Committee on the Elimination of Racial Discrimination also issued a warning against “the racist hate speech, incitement to violence and genocidal actions, as well as dehumanizing rhetoric targeted at Palestinians since October 7, 2023 by Israeli senior government officials, members of the Parliament, politicians and public figures.” 

At a time when unprecedented atrocities are being perpetrated against the Palestinian people, Meta’s decision to focus on protecting a particular political ideology seems out of step with the urgency of addressing wider, systemic violence and discrimination. 

What should Meta do instead? 

Even before October 7, previous human rights due diligence into Meta’s content moderation actions in Israel/Palestine showed that Meta’s approach to Palestinian content has been biased, violating users’ rights to freedom of expression, non-discrimination, and political participation. Despite this, anti-Palestinian, anti-Arab, Islamophobic, and antisemitic hate speech has continued to proliferate on Meta’s platforms. As noted by the UN Human Rights Committee, there is no hierarchy between protected characteristics, and discrimination on all grounds must be treated equally and seriously. This underscores the need for more robust measures to combat bigotry and intolerance in all their forms, not increased protections for specific political ideologies. 

As we emphasized in our reaction to the Oversight Board’s recent advisory opinion on how Meta should moderate the Arabic term “shaheed” — another controversial policy that accounts for most content takedowns across Meta’s platforms — scrutinizing politically charged words or expressions at scale is a tried-and-failed recipe for content moderation. 

Meta should immediately abandon this policy proposal, and focus instead on addressing all kinds of hate speech and violent rhetoric, based on international human rights standards, rather than its own political biases.