
Meta’s shareholders: We need transparency on content moderation

Companies like Meta have enormous power to affect the human rights of people around the world, including those living through crisis, conflict, and war. Shareholders have the opportunity to push them to do better by aligning their practices with human rights obligations. As Meta’s Annual General Meeting (AGM) approaches, we are paying particular attention to shareholder proposal eight, which concerns human rights risks in non-U.S. markets.

The shareholders advancing the proposal are asking Meta to issue a report on human rights risks that includes data on the number of content moderators fluent in local languages in the five largest non-U.S. markets for Instagram and Facebook, relative to the number of users. They are also asking for an assessment by external, independent, and qualified experts of the effectiveness of Meta’s measures for managing hateful content, disinformation, and incitement to violence on those platforms. 

Here’s why this proposal matters for human rights and why Meta’s shareholders should vote for it.

META CONTINUES TO FAIL TO PROTECT PEOPLE IN ITS NON-ENGLISH SPEAKING MARKETS

Meta has faced frequent and ongoing scrutiny over its lack of investment in content moderation in non-English speaking markets. It has failed to be transparent about its content moderation efforts or related human rights impact assessments; indeed, Meta has a habit of announcing plans and then neglecting to follow up or disclose how it intends to make good on its commitments. This nonchalant approach has already led to serious harm in countries like Ethiopia, leaving no room for doubt: when content moderation systems fail, hate speech, disinformation, and incitement to violence increase.

This is dangerous. It not only jeopardizes people’s safety, but also potentially undermines democracy around the world. In 2024, an estimated 2.6 billion people will vote in national elections. Media reports suggest Meta is trying to implement ad-related measures to protect the U.S. elections. However, Meta has not disclosed any measures sufficient to protect people in non-Western, non-English speaking markets. Given the current lack of effective content moderation in these markets, voters will be even more vulnerable to hate speech, disinformation, incitement to violence, and real-world harm. 

In India, for instance, Meta has repeatedly refused to publish the full human rights impact assessment report it conducted on its preparedness for the country’s ongoing 2024 elections. And in Brazil, Global Witness conducted a test revealing Meta’s lack of measures to prevent the spread of election-related disinformation on Facebook.

The European Union, meanwhile, has a law in place to compel companies like Meta to be transparent about their content moderation practices: the Digital Services Act. But it seems the company has yet to demonstrate full compliance. The European Commission has opened formal proceedings to assess whether Meta may have breached the law in three different areas, including deceptive advertising and disinformation.

HOW META’S SHAREHOLDERS CAN HELP

Under the United Nations Guiding Principles on Business and Human Rights, all businesses have a duty to conduct human rights due diligence and to take action to mitigate harm when they cause or contribute to human rights violations. Investors share this obligation. Furthermore, as the recently published U.S. National Action Plan on Responsible Business Conduct clearly states, the U.S. government expects and encourages investors to conduct human rights due diligence “when considering investments in technologies that could enable or exacerbate human rights abuses.”

The shareholders who filed proposal eight are acting as responsible shareholders should. Having identified potential human rights risks, they are taking steps to mitigate them by asking Meta to disclose more about its efforts.

“Meta has recognized the need to implement content moderation guardrails in the U.S. to guard against online hate-mongering and incitements to violence that could threaten election integrity and undermine democratic institutions,” says Anders Schelde, CIO of AkademikerPension, the proposal’s lead filer. “As Meta’s shareholders, we want to encourage the company to continue implementing these risk mitigations also in its key non-English speaking markets and also to be transparent about these actions to their stakeholders.”

OUR RECOMMENDATIONS FOR META AND META’S SHAREHOLDERS

First, Meta should acknowledge that inadequacies in content moderation can lead to serious harm, and at worst, loss of life. Then the company must take immediate action. 

Unfortunately, Meta has already published a statement opposing proposal eight, claiming that “[we] are an industry leader in human rights reporting.” Yet according to the most recent Ranking Digital Rights (RDR) Big Tech Scorecard, Meta scored the lowest on performance and disclosures, compared to peers like Google, Microsoft, and even X (formerly Twitter). In fact, RDR has recommended that Meta improve its transparency on content moderation since 2020.

Clearly, Meta needs an extra push from shareholders to improve its human rights compliance, mitigate risks, and provide meaningful transparency: the kind that will actually provide value for Meta’s shareholders, regulators, and ultimately, the people who rely on Meta’s platforms around the world. We therefore strongly recommend that Meta’s shareholders vote for proposal eight, and that Meta reverse course and adopt it.