Virtual Integrity: Three steps toward building stronger cryptographic standards

Between 15-19 September 2014, the week leading up to the first anniversary of the International Principles on the Application of Human Rights to Communications Surveillance, Access and the coalition behind the 13 Principles will be participating in a Week of Action to help explain the foundation for the principles. Every day, we’ll take on a different part of the principles, exploring what’s at stake and what we need to do to bring intelligence agencies and the police back under the rule of law.

You can read the complete set of posts at:

Surveillance laws can no longer ignore our human rights. Follow our discussion on Twitter with the hashtag: #privacyisaright


As the International Principles on the Application of Human Rights to Communications Surveillance make clear, the preservation of the integrity of communications and systems is a key obligation under international law. A report providing legal background on the Principles explained,

Just as it would be unreasonable for governments to insist that all residents of houses should leave their doors unlocked just in case the police need to search a particular property, or to require all persons to install surveillance cameras in their houses on the basis that it might be useful to future prosecutions, it is equally disproportionate for governments to interfere with the integrity of everyone’s communications in order to facilitate its investigations or to require the identification of users as a precondition for service provision or the retention of all customer data. 

Despite this, as first revealed by the New York Times, the Guardian, and ProPublica and in direct contravention of the Principles on Systems Integrity, the U.S. National Security Agency has purposefully worked to undermine the security of the internet in order to preserve its own surveillance capabilities. And this isn’t only an NSA problem. While the NSA has now been caught with its hand in the cryptographic cookie jar, other governments are likely seeking the same opportunities to insert vulnerabilities that they may later capitalize upon.

Any interference with general-use encryption standards that is not intended solely to correct vulnerabilities or otherwise increase their strength is a facial violation of the Integrity Principle. This type of interference was not always necessary. Once upon a time, a nation’s enemies communicated in secret codes that only they knew, and that only they used — you might say it was an early example of proprietary software. So when states attempted to crack one another’s communications, they used their own cryptographic experts who were skilled at cracking closed code, and typically did not involve members of the general public.

However, today encrypted digital communications are typically built upon the same open protocols, whether they are sent by a spy agency, a major corporation, or the (tech-savvy) old lady who lives up the street. Vulnerabilities that attack the security of one of these communications will necessarily impact the others. Weaknesses in encryption algorithms are akin to “back doors” into software, programs, and databases. The problem is, even if you trust without reservation the entity building that back door for its own use (which I would strongly caution against), these doors are also exploitable by other actors, be it overreaching governments, authoritarian regimes, or unaffiliated bad actors.
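To make the back-door point concrete, here is a minimal, purely illustrative sketch (a toy XOR cipher keyed by a hypothetical weakened random-number generator, not any real standard or attack): if a standard quietly shrinks the space of possible keys so an insider can enumerate it, anyone else who learns the trick can enumerate it too.

```python
# Illustrative only: a toy cipher keyed by a deliberately weak RNG.
# The "back door" is the tiny seed space -- and it works for everyone.
import random

def make_key(seed: int) -> bytes:
    # Hypothetical flawed generator: the key depends entirely on a
    # 16-bit seed, so there are only 65,536 possible keys.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

def encrypt(key: bytes, msg: bytes) -> bytes:
    # Toy XOR "cipher" standing in for any cipher keyed by the weak RNG.
    # XOR is its own inverse, so the same function also decrypts.
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(msg))

secret = encrypt(make_key(12345), b"wire transfer: $1,000,000")

# An outside attacker who knows only that messages begin with
# "wire transfer" can brute-force the entire seed space in moments.
recovered = None
for guess in range(2**16):
    if encrypt(make_key(guess), secret).startswith(b"wire transfer"):
        recovered = guess
        break
print(recovered)  # 12345 -- the attacker now reads everything
```

The same asymmetry does not exist here: the agency that planted the weakness and the criminal who rediscovers it hold exactly the same capability.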

Encryption algorithms form the foundation of a secure internet, which in turn is the basis for personal communications and social networking, e-commerce and banking, news consumption, academic research, and just about every other major use of the internet. It is therefore important to make sure that they are as strong and secure as possible. How do we make sure this happens? What solutions are necessary to protect the sanctity of private communications and to ensure that governments don’t run afoul of the Integrity Principle?

Step One: Keep your lock-makers away from your lock-breakers

The NSA has two missions. The first, Signals Intelligence (or “SigInt” in sexy spy lingo), is the most well known, and it is the mission under which the NSA “collects, processes, and disseminates intelligence information from foreign signals for intelligence and counterintelligence purposes and to support military operations.” The NSA is also charged with Information Assurance (wisely shortened to simply “IA”), under which it “defends national security information and information systems.” As such, in the simplest terms, the NSA is tasked both with protecting and breaking into sensitive databases.

You should also know a bit about another federal agency, the National Institute of Standards and Technology, or NIST. NIST, as its name suggests, develops standards for different types of technology. This includes standards for encryption technologies — the standards on which secure communications and transactions across the internet are based. The laws that form the basis for NIST’s existence explicitly require the agency to consult with the NSA in the establishment of these cryptographic standards.

Presumably, in the course of this mandatory consultation between NIST and the NSA, the NSA’s SigInt mission sometimes takes precedence over IA, resulting in holes in encryption algorithms at least large enough for the NSA’s spy-bots to crawl through. U.S. Representative Alan Grayson and other lawmakers have introduced legislation to remove the mandatory requirement for NIST to consult with the NSA (while still permitting the consultation) and to strictly prohibit the NSA from artificially weakening standards. Access applauds both of these provisions and calls upon the U.S. Congress to quickly implement them and for other lawmaking bodies around the world to consider similar measures to ensure adequate separation between conflicting missions.

Step Two: Like we learned in elementary school, cryptographers must show their work

In 2014, NIST published a document that publicly established six core principles to guide the establishment of future cryptographic standards: transparency, openness, technical merit, balance, integrity, and continuous improvement. The document was intended to restore public trust in NIST, trust the agency has lacked in the wake of the NSA revelations.

Access and a coalition of civil society organizations and companies commended NIST for the document and provided additional comments on improvements that could be made to the framework, including the addition of a seventh principle, usability, to ensure that standards that are mathematically sound do not become unsafe due to user difficulties.

These public standards and guidelines provide a source for public oversight and accountability. However, more transparency is necessary in order to fully alleviate public concern. For example, the standards-setters should be required to provide “security proofs,” giving independent experts a basis for analyzing and evaluating the technical merit of each standard. In addition, so long as standards-setters are required to consult with spy agencies, it is vital that those communications be public in order for independent experts to judge if and when the standards are being undermined.
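Public test vectors already show, in miniature, what this kind of “show your work” transparency looks like: anyone can independently check an implementation against the values a standard publishes. Here is a minimal sketch using the well-known SHA-256 known-answer test for the one-block message “abc” from NIST’s FIPS 180 materials.

```python
# Independent verification in miniature: check a local SHA-256
# implementation against the published NIST test vector for "abc".
import hashlib

# Expected digest for SHA-256("abc"), as published in FIPS 180 examples.
expected = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

digest = hashlib.sha256(b"abc").hexdigest()
assert digest == expected
print("SHA-256 matches the published NIST test vector")
```

Security proofs would extend this kind of independent checkability from individual outputs to the design rationale of the standard itself.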

Step Three: Give the gold to the people making the rules

A second coalition letter, signed by 30 companies and organizations and 5 noted technical experts, including Eleanor Saitta and Jacob Appelbaum, specifically explained that the law must empower a civilian agency to perform the information assurance functions, like those tasked to the NSA.

Any country that works on information assurance should empower such an agency, independent from any other agency that serves a surveillance function. The new agency should receive its own adequate funding and resources. In addition, the agency should be given sufficient in-house technical expertise to operate independently, rather than clinging to outside experts like a life preserver in order to serve its established function. Without these changes, it is unlikely that users will be able to fully trust much of the internet’s infrastructure, and few can feel truly secure in the privacy of their personal transactions.


These steps are not going to solve all of our cryptographic problems. For example, subsequent revelations have shown that companies may have been financially incentivized to use the weakened algorithms in products or services. However, by ensuring an independent, public, and transparent process to create technical standards, we will have taken a huge step toward compliance with the Integrity Principle. The future of privacy on the internet depends on a strong core of digital integrity that allows us to communicate and transact business in private.