
Meta shareholders to company leaders: your business model hurts human rights

Meta has just been hit with a record €1.2B privacy fine for violating the European Union’s data protection laws. But it’s clear that fines alone have not been enough to trigger significant changes to the company’s surveillance-based targeted advertising business model. In the lead-up to Meta’s annual shareholder meeting on May 31, we look at proposals from Meta shareholders aimed at addressing the company’s harmful business practices and ensuring it is held accountable for its human rights impact around the world. 

As we’ve noted in our previous posts in this series (see posts one and two), the big issue for Meta shareholders is trust. Among the three Big Tech companies we’re focusing on in the series — Amazon, Meta, and Alphabet — Meta received the most proposals by far, demonstrating the gravity of its continued failure to show that it effectively anticipates, identifies, and mitigates risks. Following are four proposals to address this failure.

PROPOSAL 5: REQUIRE THAT META CONDUCT A THIRD-PARTY HUMAN RIGHTS ASSESSMENT OF ITS CORE BUSINESS MODEL: TARGETED ADVERTISING 

With Proposal 5, Meta shareholders are asking the company to publish an independent third-party Human Rights Impact Assessment (HRIA) evaluating the actual and potential human rights impacts of Facebook’s targeted advertising policies and practices throughout its business operations.

The company published its first-ever human rights report in 2022, and in its proxy statement, Meta has responded to Proposal 5 by listing a number of actions it has taken in recent years to improve its human rights performance. This response is inadequate, however, because Meta refuses to address the human rights impact of its core business model: targeted advertising. None of the actions it lists is directly related, or appears in any way relevant, to the targeted advertising business model — the central concern of this shareholder proposal.

For a company to demonstrate that it takes its human rights commitments seriously, it cannot ignore the impact of its core business model. That is why we strongly recommend that Meta shareholders vote FOR Proposal 5.

PROPOSAL 7: REQUIRE THAT META PUBLISH THE FULL CONTENT OF ITS HUMAN RIGHTS IMPACT ASSESSMENT ON INDIA

With Proposal 7, Meta shareholders are asking the company to provide more transparency on its operations in India, including publishing the full content of the Human Rights Impact Assessment it conducted in 2019. 

Meta’s approach to publishing HRIA reports took a nosedive with the India HRIA. The company released only a very high-level summary, whereas for its previous HRIAs on Myanmar and the Philippines it published the reports in full, including detailed findings and recommendations as well as plans for implementing them. The India summary contains no such details, a sharp break from the company’s standard approach — one that suggests Meta is concealing the extent to which it may have facilitated human rights violations.

“Meta shareholders are rightly alarmed by the fact that Facebook may have functioned as a critical catalyst for religious violence in India, allowing for the dissemination of anti-Muslim hate speech, and failing to flag posts or speakers who inflame and incite, posing enormous human rights risks,” says Maen Hammad, Tech Accountability Campaigner at Ekō, one of the lead proponents for Proposal 7. “Meta’s lack of transparency concerning India is a clear and present danger to the company’s reputation, operations, and investors.”

To fully understand the risks, shareholders must have full transparency on all external assessments conducted on Meta’s operations in India. We strongly recommend that Meta shareholders vote FOR Proposal 7.

PROPOSALS 10 & 13: REQUIRE THAT META REPORT ON ITS ENFORCEMENT OF “COMMUNITY STANDARDS” AND GET EXTERNAL ASSESSMENT OF ITS AUDIT & RISK OVERSIGHT

With Proposal 10, Meta shareholders are asking the Board for more information about why the company’s enforcement of its “Community Standards” has failed to control the dissemination of content that contains or promotes hate speech and disinformation, incites violence, or harms public health or personal safety. With Proposal 13, shareholders are asking for an independent evaluation of the role and effectiveness of the Audit and Risk Oversight Committee, which was established following the Cambridge Analytica scandal. In addition to standard financial auditing duties, the Committee is tasked with oversight of “certain risk exposures of the company.”

Year after year, Meta confronts the same issue: failure to prevent the dissemination of harmful content on its platforms. The company has taken some important steps forward on human rights, such as publishing a human rights report. But this is far from sufficient to meet its human rights obligations or to demonstrate that it is addressing the most harmful impacts of its operations.

The bottom line: Meta appears to be putting a band-aid on something that is fundamentally broken — its foundational governance structures and business model. The company cannot fix these persistent problems with hateful content and political manipulation of its platforms without addressing the root causes, no matter how much money it spends on things like memberships in tech sector organizations, or how much effort goes into extensive disclosures that may relate to the issues at hand but include no steps to directly address them.

This is an opportunity for Meta shareholders to send a strong message to the executive management that the status quo is not acceptable. We recommend all Meta shareholders vote FOR Proposals 10 and 13. 

WHAT’S NEXT

Stay tuned for the final post in our series on Big Tech, which will focus on Alphabet.