
It’s not a bug, it’s a feature: How Cambridge Analytica demonstrates the desperate need for data protection

Reports from The New York Times and The Guardian show that Cambridge Analytica used enormous datasets of personal information from Facebook to micro-target voters with advertising in the U.K. and the U.S. The information was initially obtained from Facebook through a researcher and then reportedly sold to Cambridge Analytica. Facebook says this practice violated its terms of service, but the incident raises important questions about data protection in the age of data harvesting.

What’s happened so far

This past weekend, The New York Times and The Guardian published stories about Cambridge Analytica, a controversial “data analytics” company (though its activities clearly extend well beyond analytics), and its relationship with Facebook. The story begins in 2014, when a group of social scientists led by Aleksandr Kogan created and deployed a personality test called “thisisyourdigitallife” via a Facebook app. The app allowed the researchers to access personal information not only about app users but also about their Facebook friends. Those friends had not used the app and therefore could not have consented to the use of their data. This feature allowed Kogan and his team, along with potentially any other researcher with similar access, to harvest the information of a vast network of Facebook users. In this case, reports indicate that 50 million people could have had their data mined by Kogan: a couple hundred thousand “consenting” users and all of their contacts.
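
For a rough sense of the arithmetic, here is a back-of-the-envelope sketch. The number of consenting users and the average friend-network size below are illustrative assumptions, not figures confirmed by the reporting, but they show how a few hundred thousand app installs can plausibly expand into tens of millions of harvested profiles.

```python
# Back-of-the-envelope estimate of how friend-data access multiplies an app's reach.
# Both inputs are illustrative assumptions, not confirmed figures from the reporting.
consenting_users = 270_000      # assumed number of people who installed the app
avg_friends_per_user = 185      # assumed average friend count on Facebook at the time

# Naive upper bound that ignores overlap between friend networks.
naive_upper_bound = consenting_users * (1 + avg_friends_per_user)
print(f"Naive upper bound: {naive_upper_bound:,} people")  # roughly 50 million

# Overlap between friend networks would make the real count lower than this bound,
# but the order of magnitude matches the ~50 million figure in the reporting.
```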

In the background, Global Science Research (GSR), Kogan’s company, had contracted to share the data it collected with Cambridge Analytica, which had invested in advertising for the app to increase the number of users who authorized its use. Cambridge Analytica analyzed the data and used it to create and purchase highly targeted ads deployed during the 2016 U.S. presidential election, as well as potentially in other high-profile elections and debates. Company executives have claimed that Cambridge Analytica has been involved in elections around the world, including in the U.K., Argentina, India, Mexico, Nigeria, Kenya, and the Czech Republic.

In 2015, after this incident but not necessarily because of it, Facebook changed its rules to prohibit app developers from accessing the personal information of app users’ friends. Around the same time, it asked Cambridge Analytica to delete the information obtained through the personality test app, which had originally been intended for use in social science research. The company reportedly certified that it had deleted the information. It is unclear whether Facebook took any meaningful steps to verify that this was actually the case.

Late last week, Facebook suspended the accounts of Cambridge Analytica and its parent company, Strategic Communication Laboratories (SCL), following whistleblower reports that the information had never been deleted and had been left insecure. At the same time, Facebook suspended the whistleblower’s account, Aleksandr Kogan’s account, and Cambridge Analytica’s group page. Facebook also warned journalists reporting on the story that it could take legal action against them.

How could this happen?

At the time that GSR gained access to the data in this case, Facebook allowed apps to access large amounts of user data, including the data of people connected to app users even if those people did not use the app themselves. That is how Kogan was able to access the personal information of 50 million people. For reference, that is roughly the population of England. It is greater than the population of every South American country except Brazil. It is more than double the population of Australia. It is roughly the combined populations of Virginia, North Carolina, Michigan, Georgia, and Ohio.
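
To make the mechanism concrete, here is a minimal sketch of what friend-scope data access could have looked like under a pre-2015, Graph-API-style permission model. The endpoint path, API version, field names, and permission behavior shown are assumptions for illustration only, not the exact calls Kogan’s app made.

```python
import requests

# Illustrative sketch only: the endpoint, version, and fields below are assumptions,
# meant to show the shape of friend-scope access rather than the exact API used.
ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # token granted when one user authorizes the app

# Under the old permission model, an app authorized by a single user could request
# data about that user's friends, none of whom ever interacted with the app.
response = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={
        "fields": "id,name,likes,location",  # friend fields gated by friends_* permissions
        "access_token": ACCESS_TOKEN,
    },
    timeout=10,
)
response.raise_for_status()

for friend in response.json().get("data", []):
    print(friend.get("id"), friend.get("name"))
```

The point is not the specific endpoint but the trust model: one person’s consent opened the door to data about many people who never consented.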

In this system, the only thing preventing abuse of that information globally was a contract: the terms of service. Cambridge Analytica acquired the data from GSR not because of a security flaw, but because that is how the infrastructure was built to work at the time. That’s why this incident is neither a data breach nor a hack. It’s the foreseeable consequence of a common business model: the widespread (over)collection and processing of personal information to build profiles of Facebook users, in particular to enable better ad targeting.

This is not the first time the Facebook platform has been misused to the detriment of its users. Reports of incidents go back years, ranging from third-party abuse to damaging social experiments. Armies of advocates and researchers have warned about possible misuse of Facebook for years.

Human rights and corporate responsibility

Companies have the responsibility both to know about the impact of their products and services on human rights, by conducting due diligence and working with outside stakeholders, and to show that they are taking measures to prevent and mitigate any adverse effects. In this case, it is evident that Facebook knew about the potential for misuse. It has also taken steps to show its commitment to respecting user rights, such as the changes it made to its platform access policies after the initial incident with GSR in 2014.

But questions remain. Does Cambridge Analytica hold other copies of this information? Have individuals in other countries been adversely affected by these profiling tactics? How many other apps could have misused Facebook’s API? And, looking at the big picture, what will Facebook do to address the fact that there are still incentives to push these policies to their outer limits? This is what happens when an internet business model is driven by massive collection, analysis, and, as we see here, abuse of user data. How Facebook responds matters because the company, along with several other U.S. tech companies, has been explicitly and aggressively building out sales and product efforts around elections and governance globally. In other words, these companies are explicitly seeking to make themselves more important to democracy around the world, yet they are not doing enough to protect users in the process, in particular from abuse or misuse of their data.

Understanding the causes to offer solutions

We produce digital footprints at an alarming rate. Almost everything we do, online or off, can be tracked, and often is. With the dawn of the internet of things, there are even more footprints, which means that companies are building troves of personally identifiable information at ever-increasing rates.

This scenario calls for a radical change in the way we perceive the protection of personal data. Contractual terms are not enough to provide adequate prevention, mitigation, protection, and redress even for normal use of a platform like Facebook, much less for data misuse and abuse. Companies should build in mechanisms for oversight and transparency, as well as security and data protection by default, when they develop platforms and functionalities, and they should work continuously to prevent abuse. They also have to do a better job of choosing data-sharing partners and holding them accountable, using both technical and operational measures to prevent data abuse. Ultimately, if you can’t demonstrate that you can protect data, you should not collect it, market it, or sell it to third parties.

This isn’t just our wishlist. Companies have international obligations to prevent and mitigate human rights violations and to provide redress when they occur, and this includes the right to privacy, which is linked to the right to data protection (recognized as a separate right in the European Union). In the tech sector, transparency is a key driver of trust with users, and it can also help companies strengthen internal processes. Facebook should take heed of the Ranking Digital Rights indicators and forthrightly disclose information during crises like this one. Doing so can help Facebook and other firms right their ships now, preventing yet another multi-billion-dollar data Valdez that does significant damage to society. A post-hoc cover-up looks worse than the mistake, and it degrades trust.

Governments have a responsibility to do their part as well. In countries where data protection laws exist, lawmakers should be pushing to strengthen them. In countries where a data protection framework is missing or insufficient — like the U.S. — it’s time to start building.

What happens from here

Even before the news broke about the Cambridge Analytica incident, Facebook was under increased regulatory scrutiny. In 2011, the U.S. Federal Trade Commission (FTC) entered into a consent decree with Facebook after investigating allegations of deceptive trade practices. The order required Facebook to “protect the privacy and confidentiality of consumers’ information” and to undergo independent privacy audits every two years. But enforcement of the order has been lax. Now the FTC says it will examine what happened to determine whether Facebook has run afoul of the order, which could lead to fines and further action. The company also faces scrutiny from other government bodies, in the U.S. and around the world. Investigations have now been launched by data protection authorities from Australia to the U.K., as well as by the U.S. Congress, the European Commission, and the European Parliament.

We hope to see lawmakers and regulators respond quickly to this particular incident, but we also think it’s critically important to see the big picture and work toward an online ecosystem that protects users’ data rather than depending on harvesting it.

If the Cambridge Analytica scandal were to break in the E.U. three months from now, Facebook, Kogan, and Cambridge Analytica would likely face charges and heavy fines under the General Data Protection Regulation (GDPR). The GDPR is the latest, and probably the most comprehensive, data protection framework in the world. But more are coming.
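
For a sense of what “heavy fines” could mean, the GDPR caps administrative fines for the most serious violations at 20 million euros or 4 percent of a company’s worldwide annual turnover, whichever is higher. The quick calculation below treats Facebook’s publicly reported 2017 revenue of roughly $40 billion, and the exchange rate, as illustrative assumptions.

```python
# Rough illustration of the GDPR's maximum administrative fine (Article 83(5)):
# the greater of EUR 20 million or 4% of worldwide annual turnover.
reported_2017_revenue_usd = 40.7e9   # assumed: Facebook's reported 2017 revenue, ~USD 40.7B
usd_per_eur = 1.23                   # assumed exchange rate, for illustration only

turnover_based_cap_usd = 0.04 * reported_2017_revenue_usd   # ~USD 1.6 billion
flat_cap_usd = 20e6 * usd_per_eur                           # ~USD 24.6 million

max_fine_usd = max(turnover_based_cap_usd, flat_cap_usd)
print(f"Maximum fine under these assumptions: about ${max_fine_usd / 1e9:.1f} billion")
```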

Countries like Tunisia, Japan, Argentina, Australia, and Jamaica, among others, are considering new data protection laws or upgrading their existing frameworks. An expert committee in India is currently deliberating on a data protection and privacy regime for the next billion internet users. The committee was formed against the backdrop of important privacy and data protection cases being heard before a constitutional bench of the Supreme Court of India, concerning India’s national identity program, “Aadhaar,” and the legality of transferring users’ data between WhatsApp and Facebook.

Notably missing from the list of countries creating a federal framework for data protection is the U.S., where many leading technology companies are headquartered. As the global tide continues to shift and users demand more transparency and redress, where will the competitive advantage reside? What innovations will we see as Europe’s model for protecting data is replicated around the world, and more companies adopt a “privacy by design” approach? Will the U.S. be left behind?

Based on our experience engaging with lawmakers in Europe on creating the GDPR, Access Now developed a guide with do’s and don’ts for building a data protection framework. Laws built on these principles would help address the Facebook/Cambridge Analytica problem, prevent data misuse and abuse, and give people avenues for redress when their rights are violated. Our hope is that the U.S. Congress and other lawmaking bodies around the world will make use of the guide. As Tim Berners-Lee, credited as the creator of the World Wide Web, has observed, platforms like Facebook have enabled the web to be weaponized at scale, to the detriment of users’ rights and the health of our democracies. Data protection is vitally important for disarming that capacity and creating a better future for everyone.