
Australia’s plans for internet regulation: aimed at terrorism, but harming human rights

Since the tragic attack on two mosques in Christchurch, New Zealand earlier this month, Australian leaders have raised concerns that social media platforms have become facilitators, if not full-on enablers, of the spread of terrorist ideas and content. Critics have argued that giants such as Facebook and YouTube were too slow and clumsy in removing videos, accounts, and forum discussions, while smaller platforms that failed to act were proactively blocked by internet service providers.

The government appears to be considering far-reaching criminal sanctions if social media executives do not comply with newly planned measures to address the problem.

Unfortunately, the assertions made by members of Australia’s government to the media conflate a number of distinct issues, none of which could easily be resolved with a unified regulatory approach. The central issue for Prime Minister Scott Morrison at the moment appears to be the use of live streaming and the rapid continued spread of video content – as well as potentially text – through social media. However, in the same statement, Prime Minister Morrison raised the issues of fake news, online bullying, and the deficient mechanisms for enforcement of the Privacy Act. This kind of mission creep presages dangerously broad or imprecise regulation of the internet and social media.

None of this is to deny the horror of what took place. However, if careless regulation is rushed through, it will almost certainly have a long-term, negative impact on freedom of expression, a concern we recently highlighted in our submission to the Australian Competition and Consumer Commission. Writing sound policy to address challenges linked to online speech (even “terrorist” content) requires a carefully considered, measured, and proportionate approach.

In this post, we take a step back to look at the issues the Australian government is raising, one by one. The government should do the same.

Making terrorist content illegal is a challenging exercise – caution is necessary

It is entirely legitimate for the Australian government to protect its citizens from the dissemination of terrorist content online, but the government will do citizens a deep disservice if it recklessly undercuts the most treasured principle of a functioning democracy, freedom of expression, along the way.

Defining what exactly is meant by “terrorist content” is a crucial challenge for lawmakers around the world as they seek to carefully calibrate legislation that impacts online speech. It must be done right. The government seems to be considering use of “abhorrent violent material” as a working definition. A lack of clarity here is dangerous and threatens to undermine the rule of law. Vague or overbroad definitions of terms like “extremism” or “violent extremism” can easily build the foundation for human rights violations and put vulnerable communities at risk. This risk is not theoretical. Consider the plethora of governments that are using “extremism” to silence their critics and opposition.

Lessons can be learned from abroad. The European Union is currently discussing a proposal to prevent the dissemination of terrorist content online, too. The proposal raises numerous concerns about how it could lead to disproportionate restrictions on freedom of expression. Democracy advocates are up in arms, and rightly so.

The Australian government should engage in a measured, paced reflection about how to achieve its legitimate public policy goals. In doing so, it should draw inspiration from the recommendations of UN special mandate holders and civil society to stay away from proactive measures that can manifest in upload filters, unclear and overbroad definitions for terrorist content and host service providers, and other inadequate and disproportionate measures. For a number of years now, we have outlined additional recommendations on how to counter violent extremism online.

Taking down terrorist content won’t necessarily solve the problem

Increasingly, governments around the globe are looking to social media platforms for fast solutions to large-scale systemic societal issues. However attractive such a direct approach of “remove it or we fine you” may be, research has shown that the removal of content (images, videos, text, links), or even entire networks that host it, can simply result in displacing it to other spaces.

Also, worryingly, removal of content without appropriate safeguards risks destroying crucial evidence and undermining efforts to monitor and report on human rights abuses.

What might be disturbing is not necessarily illegal

Over the last two weeks, Australia’s social media has been captivated by the Tayla Harris case, and for a good reason. The way attackers bullied this female athlete using vulgar, inappropriate comments was outrageous, and it was also disturbingly representative of how female athletes are treated on a regular basis. While this example is very concerning, such online behavior should not be conflated with “terrorist content.” The publication of such content is not always illegal and should not be treated as such in every case.

Content that governments do not declare illegal, such as this type of bullying or other aggressive speech, can still be addressed by social media platforms through their Terms of Service (TOS) or Community Guidelines, tools that can help set standards of conduct on a platform.

While it is completely acceptable for the government to have a conversation with internet platforms about their TOS, it should not seek to shift the responsibility of judging what is or is not acceptable onto private actors. Tech companies have neither the mandate nor the competency to engage in such practices.

Australia is bound by international human rights law, and it should therefore remember that any measure it imposes on social media platforms that affects users’ ability to freely express themselves and access information must comply with the basic principles of legality, necessity, proportionality, and legitimate aim, including those specified in Article 19 of the International Covenant on Civil and Political Rights.

So-called “fake news”: systemic solutions to disinformation are needed

Legislators from China, to France, to Iran, to the U.S. have drafted laws and rules meant to “crack down” on “fake news,” properly called disinformation, pushing content-based restrictions on what can be written or shared on the internet. However, “fake news” remains a term with no real meaning, and its continued use by elected officials as a means to discredit certain narratives puts at risk free speech, satire, journalism, activism and community organizing, political protest, and other forms of expression.

Government regulation of the quality of news paves the way for censorship. Any law or regulation that makes government officials the arbiters of truth should be met with inherent mistrust. As a possible way forward, our report on dealing with online disinformation in the European Union advocates for three meaningful solutions:

  1. Address the business model of targeted online advertising through appropriate data protection, privacy, and competition laws,
  2. Prevent the misuse of personal data in elections,
  3. Increase media and information literacy.

Moderating massive amounts of content is no easy task

Given the sheer amount of content produced and shared online on any given day, internet platforms typically rely on automated means to monitor – and in some cases filter – what gets uploaded. The use of such automated methods can easily lead to general monitoring of online speech, a practice that constitutes a disproportionate interference with the right to privacy and can also have a discriminatory effect toward and negative impact on marginalized or vulnerable groups.

We have profound concerns about the trend around the world to force companies to rely heavily on automated systems to police and manage content, despite the fact that understanding context is absolutely critical for determining whether content should be removed. False positives are not the exception. It is therefore crucial to ensure some degree of meaningful human oversight and intervention.

Criminal sanctions must always be a measure of last resort

Criminal sanctions that restrict individual freedoms are the gravest measure a government can impose when it comes to regulating citizen behavior in a democratic society. They are a serious matter and should be treated as such by public officials.

Imposing criminal sanctions upon social media executives creates the wrong incentives, and that risks censorship. In the absence of clear rules about what is considered offending content, executives will have little choice but to remove content to protect their own liberty, too often in an overreaching manner. Clear cases of illegal or undesired content will be easier for companies to evaluate, but it is in the uncertain cases that careful consideration is needed to protect the speech of disenfranchised or marginalized communities. The reality is that much of that content will be at risk of removal, potentially for the wrong reasons.

The Privacy Act needs an update – just a different one than you’d think

In an odd twist, the federal Privacy Act is being dragged through the wringer as a possible place to impose penalties and increase real pressure on platforms. The Prime Minister’s argument conflates the protection of personal information and the management of hate speech, so the proposed reform is aimed both at the protection of Australians’ personal data and the prevention of broadcasting “violent offences.”

We fully support an update to the Privacy Act, which at the moment provides little to no data protection to Australian users. In fact, last year we published a guide for lawmakers detailing how to build a rights-respecting data protection framework. Yet Australian lawmakers should refrain from moving away from the objectives of this act. Collapsing these complex issues into a one-size-fits-all policy solution is likely to be detrimental to both freedom of expression and privacy going forward.

Conclusion

It can be tempting and seemingly simple to shift the blame to online platforms and threaten them into taking action, including threatening individuals with jail time. Platforms can play a key role in addressing complex societal challenges, including the dissemination of terrorist content online, but it is essential to address real-world issues systematically. Reports from Tuesday’s Brisbane Summit suggest that a task force including companies will be established to address the issues at stake, but any such endeavor must also include civil society actors. Progress requires inclusive, open dialogues and evidence-based policy solutions geared toward a healthier environment that reflects Australian democratic values of respect for human rights, whether online or off.