Billions of people worldwide use private messaging platforms like Signal, WhatsApp, and iMessage to communicate securely. This is possible thanks to end-to-end encryption (E2EE), which ensures that only the sender and the intended recipient(s) can view the contents of a message, with no access possible for any third party, not even the service provider itself. Despite the widespread adoption of E2EE apps, including by government officials, and despite encryption’s lifesaving role in safeguarding human rights, encryption is under attack around the world. These attacks most often come in the form of client-side scanning (CSS), which is already being pushed in the EU, UK, U.S., and Australia.
CSS involves scanning the photos, videos, and messages on an individual’s device against a database of known objectionable material before the content is sent onwards via an encrypted messaging platform. Before an individual uploads a file to an encrypted messaging app, the file would be converted into a digital fingerprint, or “hash,” and compared against a database of digital fingerprints of prohibited material. Such a database could be housed on a person’s device or at the server level.
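To make that flow concrete, here is a minimal sketch of the hash-and-compare step. It is only an illustration: real CSS proposals rely on proprietary perceptual hashing schemes (such as PhotoDNA or NeuralHash) rather than SHA-256, and the blocklist contents, file contents, and function names below are hypothetical.

```python
import hashlib

# Hypothetical blocklist of fingerprints of prohibited material, held either
# on the person's device or on a server (this entry is just the SHA-256 of a
# placeholder string).
BLOCKED_HASHES = {
    hashlib.sha256(b"example of prohibited content").hexdigest(),
}

def fingerprint(file_bytes: bytes) -> str:
    """Stand-in for a perceptual hash: here, simply a SHA-256 hex digest."""
    return hashlib.sha256(file_bytes).hexdigest()

def scan_before_send(file_bytes: bytes) -> bool:
    """Return True if the file may proceed to encryption, False if flagged."""
    return fingerprint(file_bytes) not in BLOCKED_HASHES

if __name__ == "__main__":
    attachment = b"holiday photo"
    if scan_before_send(attachment):
        print("No match: hand the content to the E2EE layer for encryption and sending.")
    else:
        print("Match: content is flagged before encryption ever takes place.")
```

The key point the sketch makes clear is that the comparison happens before any encryption, which is why CSS is described as scanning content on the client rather than breaking the cipher itself.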
Proponents of CSS argue that it is a privacy-respecting method of checking content in the interests of online safety, but as we explain in this FAQ piece, CSS undermines the privacy and security enabled by E2EE platforms. It is at odds with the principles of necessity and proportionality, and its implementation would erode the trustworthiness of E2EE channels: the most crucial tools we have for communicating securely and privately in a digital ecosystem dominated by trigger-happy surveillance.
Does client-side scanning protect privacy, freedom of expression, or related human rights?
Simply put, no. Proactive content detection and privacy protection cannot co-exist on an E2EE platform. While CSS may technically preserve some E2EE properties, since it occurs before a message is encrypted, it voids E2EE’s entire purpose and promise of preserving confidentiality.
Insofar as people’s privacy and security are concerned, CSS’s ability to circumvent E2EE is effectively the same as installing a backdoor allowing access to encrypted data that would not otherwise be possible. The right to privacy is only as good as our ability to avail ourselves of it, but CSS would compel platforms to alter their architecture and betray the “privacy-by-design” principles that allow people to exercise this right and others.
Does client-side scanning make the internet safer?
No, it does not. While making the internet safer for vulnerable and marginalized people is a legitimate aim, that end cannot justify CSS as a means. Whether scanning occurs on the device or on a server, CSS allows content shared via E2EE to be discovered in some form. This expansion of the attack surface makes the internet less secure, not more. Researchers have demonstrated that hashes can be manipulated to force or evade matches, meaning malicious actors could exploit the detection mechanism to hide material, collect sensitive data, or frame innocent people. This threatens national security and economic stability as much as it threatens human rights.
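This fragility can be shown with a minimal, hedged sketch. It again uses an exact cryptographic hash as a stand-in for the perceptual hashes CSS would actually use: with exact hashing, changing a single bit of a file produces an entirely different digest and trivially evades a blocklist match, while researchers have separately crafted visually unrelated images that collide under deployed perceptual hashes (such as NeuralHash), forcing false matches.

```python
import hashlib

# One bit of difference is enough to defeat an exact-hash blocklist lookup,
# even though the file is otherwise identical.
original = b"contents of a known prohibited file"
modified = bytearray(original)
modified[0] ^= 0x01  # flip a single bit

print("original:", hashlib.sha256(original).hexdigest())
print("modified:", hashlib.sha256(bytes(modified)).hexdigest())
# The two digests are completely different, so the modified file no longer
# matches the database entry. Perceptual hashes tolerate small edits, but
# have been shown to admit both adversarial evasions and forced collisions.
```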
When implemented at the server level, CSS enables content to be decoded on the server, something E2EE platforms are specifically designed to prevent, and deprives people of confidentiality. Storing the database on a personal device is itself a debilitating attack on privacy, given that such devices hold granular, sensitive personal information including media, notes, search histories, banking information, and medical data. Such a database could be modified and controlled by an external entity without any user control, essentially converting any personal device into a potential “bug in our pocket.”
There is also a major risk of mission creep and of setting dangerous precedents. Even if CSS were only introduced to identify a particular type of content, it would simply be a matter of time before the technology was used by authoritarian governments to suppress political and artistic material, or to target dissidents, journalists, and human rights defenders. Scanning private messages is a surefire way to chill free speech, and one that disproportionately impacts vulnerable communities.
A privacy-friendly surveillance tool has never existed, and CSS is no exception. It would indiscriminately scan every message queued for upload, without any suspicion or warrant, meaning that anyone using an encrypted channel is treated as a potential criminal. The Council of the EU’s own Legal Service has emphasized that it could lead to mass surveillance, undermine encryption, and violate the rights to privacy and data protection, while several former U.S. national security and law enforcement officials have highlighted how the widespread use of encryption is crucial for securing their country’s digital infrastructure. Any detection capability could be exploited by bad actors, including across borders, making it a national security nightmare in waiting.
Is client-side scanning even technically feasible?
In many countries, such as the UK and Australia, legislative proposals for online safety require platforms to implement a host of measures, but exempt them from implementing certain systems if doing so is not “technically feasible.”
There is currently no known, technically feasible method for implementing CSS. E2EE platforms are designed to be incapable of detecting content, and no such function can be enabled without altering the platform’s architecture in order to introduce a “vulnerability” or “systemic weakness” that undermines privacy, which at least one law requiring proactive detection specifically prohibits.
Technical feasibility assessments of CSS have found that it has a high error rate, frequently returning false results. Such false positives can be devastating, as seen when one father was flagged as a possible child predator after he shared pictures of his child with a doctor while seeking medical advice. CSS fails the necessity and proportionality test; a technology that doesn’t work cannot be said to be necessary, while the indiscriminate targeting that it would enable is inherently disproportionate.
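To see why error rates matter so much at messaging scale, consider a purely illustrative back-of-the-envelope calculation; the volume and false-positive figures below are hypothetical assumptions, not drawn from the assessments cited here.

```python
# Hypothetical figures chosen only to illustrate base-rate arithmetic.
messages_scanned_per_day = 10_000_000_000  # assumed daily volume on a large platform
false_positive_rate = 0.001                # assumed 0.1% of benign content wrongly flagged

false_flags_per_day = messages_scanned_per_day * false_positive_rate
print(f"Benign items wrongly flagged per day: {false_flags_per_day:,.0f}")
# Around 10 million wrongly flagged items every day under these assumptions,
# each one a potential exposure of private family photos or medical images
# to a human reviewer.
```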
One European Parliament impact assessment study confirmed that there are currently no CSS solutions that do not result in high error rates, concluding that CSS would undermine E2EE and the security of communications overall. Meanwhile, the same parliament’s Civil Liberties Committee voted against a CSS proposal that would have rolled out mass scanning across Europe, on the basis that the general monitoring of people’s private communications should be prohibited and that E2EE services should not be subject to scanning technologies.
Is there any alternative to client-side scanning for law enforcement purposes?
Combating child sexual abuse and other crimes against vulnerable people is a complex problem that requires complex solutions. While digital evidence can be helpful to law enforcement investigating such abuses, undermining E2EE services for everyone to obtain evidence is a disproportionate measure associated with increased privacy and security risks, including hacking and other kinds of data compromise by third parties.
Relying solely on technology risks placing even more power in the hands of platform owners, including Big Tech companies, empowering them to further intrude on people’s privacy and collect more sensitive data, without transparency or accountability. It is vital to avoid tech-solutionism as a knee-jerk method for tackling complex social problems that have a range of causes. Keeping vulnerable people, including children, safer online requires a multi-pronged approach.
It’s essential to improve people’s ability to report illegal content and to get redress, both through education and awareness-raising and through changes to platform design and the creation of robust trust and safety mechanisms. We also need to fund capacity- and skills-building for social workers, to help them understand and educate others about the benefits and risks of technology and how to guard against the latter. Similarly, we need such measures to help law enforcement better triage publicly available data, eliminating the need to scout for personal data in ways that undermine the privacy and security of the digital infrastructure we all depend on.
Experts have shown that fears over encryption preventing police from intercepting communications are overblown. On the contrary, we live in a “golden era” of surveillance, with law enforcement already able to legally access a wealth of unencrypted data.
Finally, even if CSS were able to help identify malicious actors, it simply fails all tests of necessity and proportionality required to justify any infringement of fundamental human rights. While entering your home without a warrant might make law enforcement’s work easier, it is forbidden precisely because this would be an unnecessary and disproportionate human rights violation — and the same principle must be applied to govern law enforcement’s access to your online life and communications.
Why must we reject the false binary between privacy and security?
Despite regulators’ attempts to argue the contrary, placing privacy and security in opposition to one another only further undermines online security. Platform accountability measures are, of course, needed to ensure safety and the protection of rights. But targeting encryption misses the wood for the trees. No matter how well-intentioned, any online safety regulation that undermines encryption is legislative wishful thinking. Encryption, privacy, and safety go hand in hand, and it is high time we shelved proposals that threaten this relationship.