It’s not a glitch: how Meta systematically censors Palestinian voices

Content note: The following post contains references to violence and war.

Since Hamas attacked Israel on October 7, 2023, and Israeli forces began bombarding Gaza in response, Palestinian and pro-Palestinian voices have been censored and suppressed across Meta’s platforms. This latest wave of censorship, which coincides with “apocalyptic” violence in the Gaza Strip and stark warnings of genocide from the UN and the International Court of Justice (ICJ), adds to Meta’s long history of systematically censoring Palestine-related content. While the company has stated that it’s never their “intention to suppress a particular community or point of view,” our documentation points to the opposite conclusion. This pattern of censorship is no glitch. 

In this report, we show how Meta is systematically silencing the voices of both Palestinians and those advocating for Palestinians’ rights. We delve into the roots of this censorship and explain why the company must overhaul its rights-violating and discriminatory content moderation policies and take action to avoid complicity in alleged war crimes, crimes against humanity, and genocide.

Patterns of censorship on Instagram and Facebook

Soon after Israel began bombarding Gaza in October 2023, Palestinians and people sharing pro-Palestinian messages began to report that their content was being censored and suppressed on social media platforms, including Facebook and Instagram. Platforms suspended or restricted the accounts of Palestinian journalists and activists both inside and outside Gaza, and arbitrarily deleted a considerable amount of content, including documentation of atrocities and human rights abuses.

Examples of this online censorship show that it is rampant, systematic, and global. For instance, Human Rights Watch (HRW) documented 1,049 cases of peaceful content expressing support for Palestine, originating from more than 60 countries, that was removed between October and November 2023. Meanwhile, the Palestinian Observatory for Digital Rights Violations (7or) documented 1,043 instances of censorship between October 7, 2023 and February 9, 2024, including on Facebook and Instagram. From content removals to opaque restrictions, the examples below illustrate the main patterns of censorship documented on Meta’s platforms since October 7, 2023. Some of the cases we mention were reported directly to us, while others were shared publicly by impacted individuals.


Arbitrary content removals
Suspension of prominent Palestinian and Palestine-related accounts
Restrictions on pro-Palestinian users and content
Shadow-banning

Why is Meta censoring Palestinian voices?

Meta’s censorship of Palestinian voices and Palestine-related content is far from new. In recent years, however, it has become increasingly pronounced, with a well-documented pattern of systematic censorship, algorithmic bias, and discriminatory content moderation emerging. During the 2021 Sheikh Jarrah protests, social media content expressing support for Palestinian rights was removed or shadow-banned, users sharing such content were suspended or prevented from commenting or live-streaming, and pro-Palestinian hashtags were suppressed: problems Meta brushed off as a “technical issue.”

As our latest documentation illustrates, Palestinian journalists’ and activists’ accounts are routinely suspended or restricted, and their content is removed arbitrarily. This systematic censorship is particularly rampant in times of crisis: a by-product of opaque and discriminatory content moderation rules, enforced in a way that disproportionately impacts historically oppressed and marginalized communities.

Flawed and discriminatory content moderation policies  

Meta’s censorship is catalyzed by its problematic Dangerous Organizations and Individuals (DOI) policy, most recently updated in December 2023. The policy prohibits the glorification, support, and representation of designated groups and individuals “in an effort to prevent and disrupt real-world harm.” Although it rightly aims to tackle online incitement to violence, the policy’s vague and overly broad interpretation of what constitutes “glorification” or “support” of such individuals and groups creates a sweeping net that ends up capturing legitimate content, which should be protected by the right to freedom of expression and opinion.

By and large, Meta does not disclose whom it designates as “terrorist,” nor how and why it does so. The company does acknowledge that its designations include entities blocklisted by the U.S. government as foreign terrorist organizations (FTOs) or specially designated global terrorists (SDGTs), but the full list remains a secret. However, a leaked version published in 2021 by media outlet The Intercept revealed that the majority of the groups and individuals Meta labels as “terrorist” come from the Arab and Muslim world. These include Palestinian political factions and their armed wings, such as Hamas and the Popular Front for the Liberation of Palestine (PFLP).

A human rights due diligence report Meta commissioned into its content moderation actions during the 2021 conflict in Israel/Palestine confirms the damaging impact of such secretive and politicized designations. BSR, which conducted the investigation, found that Meta’s over-moderation of Palestine-related content was largely due to “Meta’s policies which incorporate certain legal obligations relating to designated foreign terrorist organizations.” The report noted that Palestinians were more likely to be seen as violating Meta’s DOI policy “because of the presence of Hamas as a governing entity in Gaza and political candidates affiliated with designated organizations.” In essence, this means Palestinians may be automatically censored, or have their accounts shut down, for posting content that merely mentions groups such as Hamas, even if the content in question is factual reporting or critical of Hamas.

But it isn’t only the DOI policy that is biased. Under its hate speech policy, Meta removes content that is critical of “Zionists.” While the company claims it only removes content where the word “Zionist” is used as a proxy to attack Israeli or Jewish individuals and groups, this policy was widely criticized by human rights organizations and progressive Jewish and Muslim community groups in 2021. In February 2024, four months into the unfolding genocide in Gaza, Meta began new civil society consultations with a view to possibly expanding the scope of its policy enforcement. As Access Now has previously warned, any use of historically and politically complex terms such as “Zionism” should be considered with careful nuance and deliberation. Implementing a blanket policy that automatically flags any mention of “Zionism” opens the door to censorship and abuse. As early as 2017, The Guardian published leaked training materials for Meta’s content moderators, including a slide deck on “Credible Violence: Abuse Standards” that listed “Zionists” among global and local “vulnerable” groups. Such special treatment of a political ideology undermines people’s right to freedom of expression and stifles critical public debate online.

Inconsistent and discriminatory rule enforcement

Meta’s response to the current Israeli assault on Gaza has been markedly punitive and discriminatory compared with, for instance, its response to Russia’s illegal invasion of Ukraine in February 2022. This follows a historical pattern of implementing content moderation rules that prioritize the protection of Israeli users at the expense of Palestinian or pro-Palestinian users’ rights, even if unintentionally, as BSR’s 2022 human rights due diligence report highlighted. The report notes, for instance, that Arabic content was over-moderated while Hebrew content was under-moderated, because the company had not developed a Hebrew-language classifier to detect and remove such content, despite Hebrew-language hate speech and incitement to violence being rampant across its platforms. In September 2023, Meta said it had completed a Hebrew-language classifier for hostile speech; however, as of October 2023, the classifier was reportedly still not operational.

Despite evidence of an unfolding genocide in Gaza and rising violence against Palestinians around the world, Meta has largely ignored threats to Palestinians’ safety. Since October 7, Meta has failed to properly moderate the unprecedented explosion in hate speech, dehumanization, and genocidal rhetoric against Palestinians disseminated across its platforms, including by Israeli officials and verified Israeli state accounts with a wide reach. 

For example, in November 2023, a 7amleh investigation revealed that Meta had approved paid advertisements by a right-wing Israeli group calling for the assassination of a pro-Palestine activist in the U.S., as well as ads calling for a “holocaust for the Palestinians” and for “Gazan women and children and the elderly” to be wiped out. Israeli pages and targeted ads promoting the ethnic cleansing of Palestinians in the West Bank and the Gaza Strip have also multiplied since October, but this isn’t a new trend: in 2021, Jewish settlers used WhatsApp to organize violent attacks against Palestinian citizens of Israel.

The Wall Street Journal has reported that, following the October 7 attack, Meta adjusted its content filters to apply stricter standards to content generated in the Middle East, and in Palestine specifically: it lowered the confidence threshold at which its algorithms detect and hide comments that potentially violate its Community Guidelines from 80% to 40% for content from the Middle East, and to just 25% for content from Palestine.
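To make this mechanism concrete, the sketch below shows how a region-dependent confidence threshold works. This is a minimal, hypothetical illustration: the structure and names are our assumptions, not Meta’s actual code, and only the reported threshold values come from the WSJ’s reporting.

```python
# A minimal, hypothetical sketch of region-dependent moderation thresholds,
# illustrating the mechanism the WSJ described. The structure and names are
# our illustrative assumptions, not Meta's actual code; only the threshold
# values (80%, 40%, 25%) come from the reporting.

HIDE_THRESHOLDS = {
    "default": 0.80,      # baseline: hide only high-confidence violations
    "middle_east": 0.40,  # reported stricter standard after October 7
    "palestine": 0.25,    # reported threshold for content from Palestine
}

def should_hide(violation_score: float, region: str) -> bool:
    """Hide a comment when the classifier's confidence that it violates
    the rules meets the threshold set for the commenter's region."""
    threshold = HIDE_THRESHOLDS.get(region, HIDE_THRESHOLDS["default"])
    return violation_score >= threshold

# A comment the model is only 30% sure violates the Community Guidelines
# stays up in most regions, but is hidden if it comes from Palestine.
for region in ("default", "middle_east", "palestine"):
    print(region, should_hide(0.30, region))
```

Lowering a region’s threshold means the system hides comments it is far less certain about, mechanically increasing false positives for users in that region.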

Meta’s double standards are further illustrated by two cases recently adjudicated by the company’s Oversight Board, which show how Meta made exceptions to its DOI policy to allow content about Israeli hostages to be shared on its platforms. In the first instance, Meta initially removed content showing Israeli civilians being taken hostage, in line with the DOI policy, but later restored it under an exception to the policy, citing the need to raise awareness about the plight of the hostages and debunk disinformation around the October 7 attacks.

Civil society made repeated requests for Meta to also allow policy exceptions for content showing the harm done to Palestinians in Gaza, but these were consistently dismissed, with Meta continuing to aggressively censor Palestinian voices and silence their stories. 

Arbitrary and erroneous rule enforcement

As we’ve seen time and time again, Meta’s content moderation tools, particularly its automated decision-making systems, are poorly trained, not fit for purpose, and especially ill-suited for use in non-English languages. Leaked documents from 2020 show that Meta’s algorithms for detecting terrorist content erroneously deleted non-violent Arabic content 77% of the time. Given how heavily Meta relies on automated content moderation tools, this is an unacceptable error rate. As such, Meta’s recent modification of its DOI policy to allow neutral or critical references to designated groups in the “context of social and political discourse” will be difficult to implement at scale. Distinguishing between neutral references and content praising designated groups requires a nuanced understanding of the content’s political, regional, and historical context, which Meta’s algorithms do not have.
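The sketch below illustrates this failure mode with a deliberately naive filter; it is a hypothetical illustration built on our own assumptions, not Meta’s implementation.

```python
# A minimal, hypothetical sketch of a naive designated-entity filter. The
# term list and example posts are our assumptions, not Meta's actual
# implementation; the point is that mention-matching cannot distinguish
# praise from factual reporting or criticism.

DESIGNATED_TERMS = {"hamas", "pflp"}

def naive_doi_flag(post: str) -> bool:
    """Flag any post that merely mentions a designated group."""
    words = {word.strip(".,!?\"'").lower() for word in post.split()}
    return bool(words & DESIGNATED_TERMS)

posts = [
    "Hamas announced a ceasefire proposal today.",  # factual reporting
    "I condemn Hamas for the October 7 attacks.",   # criticism
    "Glory to Hamas!",                              # glorification
]

# All three posts are flagged identically, even though only the last is
# the kind of "glorification" the DOI policy claims to target.
for post in posts:
    print(naive_doi_flag(post), "<-", post)
```

A filter like this treats factual reporting, criticism, and praise identically, which is exactly the distinction the “social and political discourse” exception would require automated systems to draw.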

Meta’s lack of transparency around its tools, their accuracy, and error rates, coupled with a penalty strike system that punishes individuals who violate the DOI policy multiple times, is a disastrous recipe for censoring legitimate, non-violent speech on its platforms – and this pattern is only magnified when Meta over-relies on automated tools to moderate Instagram and Facebook content.
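As a rough, hypothetical illustration of how automated errors and strikes compound, consider the sketch below; the strike limit and account model are our assumptions, since Meta does not fully disclose how its penalty system works.

```python
# A minimal, hypothetical sketch of how a strike system compounds automated
# errors into account-level punishment. The strike limit and account model
# are our assumptions; Meta does not fully disclose how its penalty system
# works.

from dataclasses import dataclass

STRIKE_LIMIT = 3  # hypothetical number of DOI strikes before suspension

@dataclass
class Account:
    strikes: int = 0
    suspended: bool = False

def apply_automated_decision(account: Account, flagged: bool) -> None:
    """Each automated flag becomes a strike; enough strikes suspend the account."""
    if flagged:
        account.strikes += 1
        if account.strikes >= STRIKE_LIMIT:
            account.suspended = True

# At a 77% false-positive rate on non-violent Arabic content, a journalist
# posting regularly can accumulate erroneous strikes within days.
journalist = Account()
for _ in range(3):
    apply_automated_decision(journalist, flagged=True)
print(journalist)  # Account(strikes=3, suspended=True)
```

Under such a system, even a modest false-positive rate is enough to silence accounts that post frequently about Palestine.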

Human rights can’t be cherry-picked

Meta cannot pick and choose when it respects human rights and ensures the safety of its users – and when it doesn’t. As previously noted in our “Declaration of principles on content governance in times of crisis,” Meta and other social media platforms must develop rights-respecting crisis protocols to identify and mitigate the negative impact of their content moderation policies and practices on people’s fundamental rights and freedoms, especially in times of war. 

In the context of the unfolding genocide in Gaza, Meta must take extra care with how it moderates content to ensure that Palestinians and their supporters can safely and freely access and share information. 

In 2021, a coalition of civil society organizations called on Meta to stop systematically censoring Palestinian voices by overhauling its content moderation practices and policies. More than two years later, our demands remain unmet. It is high time that Meta addresses this issue once and for all.