Private tech, humanitarian problems: how to ensure digital transformation does no harm

People experiencing vulnerability as a consequence of conflict and violence often rely on a small group of humanitarian actors, trusted because of their claims of neutrality, impartiality, and independence from the warring parties. They depend on these organisations and agencies for subsistence, protection, and access to basic services and information at the darkest times in their lives. Yet these same actors can expose them to further harm. Our new report, Mapping Humanitarian Tech: exposing protection gaps in digital transformation programmes, examines the partnerships between humanitarian actors and private corporations. Our aim is to show how these often-opaque partnerships affect the digital rights of affected communities, and to offer recommendations for keeping people safe.

The context: a data-for-aid dynamic

The relationship between humanitarian actors and the communities they serve has evolved to include an increasingly transactional dynamic: people are asked to share data about themselves, their families, and their needs, and based on that information, the organisation or agency provides some of the assistance needed. As a result, over the past century and a half, teams of well-meaning people have flocked to disaster-affected areas to conduct long interviews and assessments, often collecting the same data over and over again from the same exhausted and confused communities.

The current digital transformation of humanitarian response has been hailed as a long-overdue sign of improvement and modernisation in a sector often accused of being conservative and inefficient. This process has strengthened the partnership model between humanitarian agencies and private-sector actors, especially tech companies.

What remains unclear is the full extent of these partnerships and the status and impact of the aid-tech relationship. In any given partnership, we should be asking: What are the specific terms of the humanitarian actor’s agreement with a tech company? What role does data play in it? Who controls the data and metadata at each stage of the pipeline? Which third parties are involved, and what access do they have to sensitive data? Is any of that data sold? Do all the actors involved have policies governing the partnership, and what do those policies cover? If the parties do harm, are there pathways for recourse or remedy for affected people?

The investigation

Our research, which entailed extensive desk research and 45 semi-structured interviews with experts from humanitarian organisations, tech companies, the research community, and the public sector, sought to answer some of these questions. The investigation revealed that at least 220 companies are engaged in more than 50 major humanitarian partnerships or initiatives, and that at least 14 membership-based platforms broker deals between the two sectors. Further analysis showed that these partnerships are typically opaque, increasingly consolidated in few hands, deal in the data of the world’s most vulnerable people, and provide fertile ground for greedy data brokers and intermediaries.

Our inquiry focused on areas of technology that have important implications for digital rights, namely: 

  • Sensitive data management and communication; 
  • Digital ID and biometrics; 
  • Connectivity and cybersecurity; and 
  • Advanced analytics, AI, and cloud processing. 

Private-sector companies and humanitarian agencies are converging, and in some cases even swapping roles, with serious implications for the digital rights of people and communities in extremely vulnerable situations. 

Our discussion includes analysis of the following data points and developments:

  • the widespread lack of public disclosure on key issues such as procurement, data protection impact assessments (DPIAs), and data-incident reporting;
  • data showing that humanitarian organisations now essentially rely on corporate actors as their main enabling partners for responding to complex crises;
  • patterns of cross-hybridisation among humanitarian entities and tech companies, with both sides progressively taking up functions belonging to the other group;
  • evidence that connectivity is still not considered an essential asset or priority in humanitarian response, with partnerships mostly limited to functional provision to international organisations and their direct partners;
  • the normalisation of data-hungry systems, such as digital identity and biometrics programmes, as a prerequisite for digital access or even humanitarian assistance; and
  • the intricate and controversial tradeoffs and ethical challenges in developing predictive analytics models and AI tools for corporate systems, among other key topics. 

In addition to highlighting key trends and concerns emerging from the data, we identify areas for further investigation, and offer recommendations for donors, the humanitarian community, tech companies, and local actors and communities.

Conclusion: we need more transparency and human rights safeguards 

Despite the challenges we identify, our research confirms many of the positive claims humanitarian aid actors make about how they are improving data and tech management, and we were able to dispel some of the stereotypes and clichés surrounding the digital transformation of humanitarian response. The picture that emerges shows a humanitarian sector that is experimenting on the cutting edge of technology, pushing the boundaries of technical development in extremely harsh conditions, and navigating seemingly impossible challenges while balancing high stakes and risks. 

That said, the collaboration between aid organisations and companies is also bringing emerging and untested technologies to volatile contexts without sufficient transparency, due process, protection protocols, or recourse mechanisms. As we discovered, humanitarian tech partnerships seem to dodge normative and regulatory data protection frameworks by claiming that the technology or its implementation is extra-legal or extraterritorial, by taking advantage of the secretive nature of their agreements, and sometimes by invoking their immunities. These gaps leave people participating in aid programmes without the recourse or oversight mechanisms that would allow them to enforce their human rights.

Our findings also support and expand on the concerns raised by human rights and humanitarian advocates, especially from the Global Majority, who see a humanitarian tech landscape that is exclusionary and extractive by design. The actors playing an active role are few, mostly from Global Minority countries, and consolidating over time. This results in de facto control over aid technologies and data by a worryingly limited number of companies. In addition, these tech actors almost always provide similar or related services to military or law enforcement agencies in the very same regions, and sometimes the very same places, where they contribute to humanitarian support, with no clear guarantees on the separation of sensitive and commercial data, and no safeguards to screen for conflicting interests or prevent regulatory capture.

More broadly, we see two converging processes unfolding: the transformation of some aid actors into tech services providers, and the transformation of tech companies into de facto humanitarian service providers. This is happening without these entities taking up the responsibilities and duties that regulate their expanded area of intervention, resulting in serious gaps in accountability. 

Humanitarianism has the potential to benefit from the constant improvement of tools that protect the dignity and improve the lives of people experiencing vulnerability. But we cannot achieve those aims with an approach that treats the most vulnerable as test subjects for often-exploitative technologies, creating long tails of data with uncertain impacts. Through the lens of human rights, we have learned the hard way how harmful digital transformation can be when it is based mostly on narrow market incentives, with no oversight. We see a better path.

As we release our report with key recommendations, Access Now calls for effective and concrete actions by humanitarian aid actors, tech companies, donors, and the entire humanitarian community. We hope to see transparency, human rights protections, and principled humanitarian approaches become the top priority in responding to the needs of the most vulnerable people and their communities.