Why shareholders don’t trust Big Tech — and how to fix that

Big Tech is plowing ahead with the development and implementation of AI and AI-based tools, and companies’ AI blunders and mishaps are stirring heated debate on the human rights implications. Even though some of the media coverage may be inadvertently feeding the AI hype cycle, we’re also seeing critical, grounded calls for proper human rights safeguards, transparency, and oversight. The question is, are these important conversations also happening behind closed doors at corporate headquarters?

Here’s the good news: as we head into Big Tech’s Annual General Meeting (AGM) season, there are a number of shareholder proposals backing changes with the potential to transform the tech sector for the better. These changes are important for safeguarding human rights and mitigating the risks of technology, including those of AI. Below, we explain why. 

First, consider that the time is ripe for investor voices to be heard. Filing a shareholder proposal is often a last resort, pursued after investors have failed to secure company engagement on an issue, or after a company has ignored concerns shareholders have consistently raised. The large number of 2023 filings nevertheless demonstrates that shareholders are persistent and that momentum for change is growing. Furthermore, several of the proposals address roadblocks to transforming Big Tech corporate practices. All of this suggests shareholders are poised to make a positive difference.

In our review of proposals for Amazon, Alphabet, and Meta, three core themes emerge:

1. Shareholders don’t trust companies to identify and mitigate human rights harms 

Amazon, Alphabet, and Meta have clearly failed to earn shareholder trust regarding how they conduct risk management and mitigation. 

  • Amazon shareholders have filed proposals requesting that the company demonstrate heightened due diligence for technology that can enable surveillance and violate privacy rights, and disclose how it vets the customers to whom it is willing to sell such technology.
  • Meta shareholders are asking the Board to commission an independent review questioning the role and effectiveness of the Audit and Risk Oversight Committee, which was established after the Cambridge Analytica scandal. They are also pressing Meta’s Board on why the company’s enforcement of its “Community Standards” has failed to curb the dissemination of content that contains or promotes hate speech and disinformation, incites violence, or endangers public health or personal safety.

2. Shareholders are demanding greater transparency around products, policies, and practices that actively undermine human rights

It is clear that all three companies have also failed to demonstrate the capacity to adequately self-correct without oversight. 

  • Meta is under the microscope for its conflicting policies and lack of transparency around the company’s core business model — targeted advertising. Shareholders are proposing that the Board of Directors “publish an independent third-party Human Rights Impact Assessment (HRIA), examining the actual and potential human rights impacts of Facebook’s targeted advertising policies and practices throughout its business operations.”

3. Shareholders want to change voting structures to render companies more accountable to the broader investor community

At both Meta and Alphabet, shareholders have submitted proposals directly requesting the removal of outdated and opaque voting structures so that all shareholders have equal voting power.

Realistically speaking, where corporations like Meta and Alphabet employ dual-class voting structures, in which founders and board insiders hold shares carrying multiple votes while ordinary shareholders do not, the Board is likely to turn down proposals to change those structures. However, as pressure continues to mount, it is clear that this bulldozing of shareholders’ voices will neither quell the concerns nor address the underlying issues in these companies’ risk management processes and overall governance.

How companies should respond: address the issues that increase risks 

These proposals all point to significant gaps in these companies’ commitment and capacity to lead technology development in a transparent and trustworthy manner. The foundational governance structures and business models these companies rely on appear to be broken and are already yielding harmful results. Unless Big Tech steps up to address these concerns, rapidly deploying new technologies will only exacerbate and widen the scale of those harms.

As AGM season unfolds over the next few weeks, we will share specific information on the key shareholder proposals we support. With competition in AI accelerating, the responsibility to demand transparency and accountability from Big Tech lies not only with shareholders, but also with regulators, governments, end users, and society at large. Addressing the human rights risks of AI and other technologies now is imperative for everyone. Otherwise, as these technologies proliferate, at-risk individuals and communities around the world will continue to bear the burden of faulty products — including rights-harming surveillance, disinformation, hate speech, and incitement to violence — exacerbating the already dangerous problems plaguing our digital spaces.