We would like to thank Fabien Bénétou (@utopiah) for sharing his technical expertise during the research for this blog.
Why are we talking about augmented reality?
Since the hype surrounding Pokémon Go subsided, augmented reality (AR) has not been making many headlines. However, two recent announcements brought AR back into the news cycle. Nintendo announced Mario Kart Live: Home Circuit, a new game developed by Velan Studios, and on September 16 Facebook announced Project Aria, its effort to develop its own augmented reality glasses.
In this post, we explain why AR, and particularly its incorporation into headsets or glasses, presents serious concerns for our digital rights. We look at how the technology has developed over time, share our letters to Nintendo and Velan Studios asking how the companies plan to safeguard our rights, and explore what we can do to make sure those rights are protected.
What is AR?
Augmented reality is a broad term for technology that overlays a digital layer on our view of the environment. Objects in the real world are enhanced with computer-generated perceptual information that blends the real world with a virtual one, allowing real-time interaction and accurate 3D registration of virtual and real objects. This technology can be deployed using different gadgets, like your smartphone (think of photo filters on Snapchat or Instagram) or wearables such as glasses.
While the majority of current uses seem relatively banal, the technology is already being used in more serious domains. For example, the U.S. Army uses it to digitally enhance training missions for soldiers, Chinese police use it to identify suspects, and neurosurgeons are experimenting with using an AR projection of a 3D brain to aid them in surgeries. To delve a bit deeper into how AR works, let’s look at how Nintendo and Facebook are planning to use it.
A short history of AR, video games, and digital rights
Nintendo’s new AR game, Mario Kart Live: Home Circuit, is not the company’s first foray into augmented reality. In 2016 Nintendo collaborated with Google spin-off Niantic Labs and the Pokémon Company to produce Pokémon Go, generally regarded as the app that launched AR gaming into the mainstream.
Looking at the world through your phone’s screen, you could see your surroundings “augmented” with little collectible monsters you could capture, and you were encouraged to explore in search of rare catches — which led to everything from trespassing to muggings and the discovery of a dead body.
Pokémon Go drew criticism from privacy advocates because of the vast amount of data it collected about players as the game tracked their movements, and their concerns were heightened by the fact that Niantic’s CEO was the engineer behind the unauthorized collection of WiFi data in the Google Maps “Wi-Spy” scandal. These worries prompted the Electronic Privacy Information Center to send a letter to the U.S. Federal Trade Commission noting that “[c]ollecting and compiling detailed maps of consumers’ location history causes substantial injury to consumers by posing serious safety and privacy risks of abusive data practices and identity theft.”
With its latest move into AR gaming, Nintendo will face a fresh wave of privacy and data protection concerns. For Mario Kart Live: Home Circuit to function, you must drive a remote control car around your living room (or other area) and have it pass through a series of gates. The system maps the area to create the virtual racetrack. You can then look at the screen of your Nintendo Switch console and see your own living room transformed into a racetrack with AR obstacles and competitors.
While this certainly sounds fun, all of us should be wary of allowing a device to create such a detailed map of the inside of our homes. In recent years, smart home appliances such as vacuum cleaners and home security systems have sparked legitimate concerns about privacy, security, and data protection, and having AR devices map our most private spaces adds a further dimension to those concerns. The Electronic Frontier Foundation is already raising red flags about the use of virtual reality (VR) headsets to create detailed depth maps of these spaces. Facebook’s “shared spaces” feature for its Oculus headset, for example, would require creating such a map, opening our homes to governments, law enforcement, or hackers seeking to gather information about us — unless the information is properly encrypted and protected.
Nintendo’s new Mario Kart device may also collect a lot of potentially sensitive data about players – including children – as they pilot Mario and Luigi (and maybe in the future, characters like Princess Peach) around their sitting rooms or kitchens. So far, the company has shared no information about precisely what data are collected, whether the information is stored locally or sent to a cloud, whether it is encrypted, who will have access to it, or whether the information — or the insights and inferences made from it — will be shared with third parties. That is why we’ve just sent a letter to Nintendo and Velan Studios asking for more information.
Return of the “glassholes”?
While Nintendo’s latest AR venture raises clear concerns, the situation is even more serious when it comes to AR glasses like those being developed by Facebook as part of Project Aria. It’s not the first time we’ve seen a product like this: in 2014, Google made headlines for all the wrong reasons with Google Glass, the AR glasses that gave us the term “glasshole” and turned out to be a commercial flop (while still selling for specialized industry purposes).
Those who bought Google Glass came to be known as “glassholes” because the people around them — innocent bystanders — felt the glasses were being used to violate their privacy. In some cases, the negative reaction was so extreme that people physically assaulted Glass wearers, and Google published a set of recommendations to help users avoid getting attacked. At the time, some characterized opponents of Glass as “anti-tech,” but they were voicing reasonable concerns about how these devices could be used, especially considering that they could become yet another part of Google’s vast data collection network.
The demise of Google Glass as a consumer product did not stop companies from developing AR headsets and glasses, and Google and Facebook have since acquired some of them (North and CTRL-Labs, respectively). There are a number of AR headsets and glasses currently on the market, ranging from extremely pricey and bulky headsets, such as Microsoft’s HoloLens 2 and the Magic Leap One, to simpler models that look like normal spectacles, like the Vuzix Blade and Snapchat’s Spectacles.
There are also rumors that Apple will launch its own AR glasses in 2021, and Amazon – which is already working in AR – is launching Echo Frames, a pair of smart glasses that incorporates the Alexa virtual assistant. While many of these products are aimed at business and industry, some are marketed to people with specific hobbies, such as cyclists and drone enthusiasts, and the new generation of products now in development appears to be aimed at the general public.
That’s where Facebook’s Project Aria fits in. The company plans to develop a sleek, consumer-friendly pair of AR spectacles. If devices like this become more powerful, less bulky — like a normal pair of glasses — and potentially hugely popular, the risks to our rights will likewise grow exponentially. According to researchers in the field, broadscale use of AR in everyday life could be as disruptive as the internet itself.
What AR means for our rights in 2020 and beyond
Back when Google Glass was launched, we expressed our concerns regarding its impact on privacy. We asked: How would consent work when people wearing these glasses in public capture the biometric data of bystanders? Who owns the data collected through the use of these devices? How and where is that information stored, and who gets access to it?
Things have changed a lot since then, on both the technological and legislative fronts. In many countries, we have seen standards for data protection improve after the European Union’s General Data Protection Regulation (GDPR) came into force and other similar laws were passed, such as the California Consumer Privacy Act (CCPA) in the U.S. These legal tools give us new ways to ensure that our rights are respected, and they have a significant impact on how AR glasses and headsets would have to function in order to comply with the law.
Simultaneously, we are seeing new advances in AR, such as more precise real-time mapping from depth sensors, that have opened up new risks that did not exist when Google Glass was released.
Given that context, what should we be worried about? There are more general concerns with AR technology, and more specific concerns that arise when you use devices such as specialized headsets, which add extra layers of complexity to the problem.
Why AR threatens our rights
For advanced AR technology to work, it has to create a 3D model of the real world, which can mean gathering huge amounts of information about us and our surroundings. That model is what allows the system to overlay virtual objects on the physical world in a convincing manner. For example, to make an animated object such as a Pokémon appear to be standing on your table, the AR system needs to recognize the dimensions and depth of the table. Photo filters on Snapchat and Instagram work by building a detailed 3D map of your face onto which the filter is applied.
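To make this concrete, the core geometric step — deciding where on the screen a virtual object should be drawn, and whether a real surface the depth sensor saw should hide it — can be sketched in a few lines. This is a minimal illustration of the idea, not the implementation any real AR system uses; the camera parameters and depth values below are invented for the example.

```python
# Minimal sketch of 3D registration in AR: project a virtual object's
# anchor point into the camera image, then use a depth map of the real
# scene to decide whether the overlay should be hidden ("occluded").
# Camera intrinsics (fx, fy, cx, cy) are illustrative values only.

def project_point(x, y, z, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole-camera projection of a 3D point (camera coordinates,
    z = depth in metres) to pixel coordinates (u, v)."""
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

def is_occluded(u, v, virtual_depth, depth_map):
    """The overlay is hidden if the real surface the depth sensor saw
    at that pixel is closer to the camera than the virtual object."""
    real_depth = depth_map[int(v)][int(u)]
    return real_depth < virtual_depth

# A virtual object anchored 2 m in front of the camera, 0.5 m to the right.
u, v = project_point(0.5, 0.0, 2.0)
print(round(u), round(v))  # 520 240 — the pixel where the overlay is drawn

# A fake depth map: the sensor saw a flat wall 3 m away everywhere.
depth_map = [[3.0] * 640 for _ in range(480)]
print(is_occluded(u, v, 2.0, depth_map))  # False — object sits in front of the wall
```

Even this toy version shows why the privacy questions matter: the `depth_map` the occlusion check depends on is, in a real device, a continuously updated geometric scan of your surroundings.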
The important thing to consider is what happens with all this data. Is it stored and processed locally on the device, or sent “to the cloud”? If the information is sent to a cloud, will it be encrypted? Will this data be shared with third parties or used to make inferences about us, which could then be used to target ads? Are there ways for us to exercise our data protection rights? Do we get transparency into how our data will be handled or processed? Are we given an opportunity to give meaningful consent?
The questions don’t stop there. It’s possible for uses of AR to harm rights beyond privacy and data protection. There will be content governance issues in AR spaces, with implications for free expression. For instance, imagine mixing deepfakes with AR, something that Walter Pasquarelli calls Synthetic Reality. What happens when we are able to virtually place objects — including offensive, harmful, and illegal objects or slogans — on top of real world locations?
Will far-right groups use AR to label the houses of immigrants? Will school bullies use AR to place offensive objects in the “virtual garden” of a victim’s house? These hypothetical scenarios are just a hint of the challenges AR is likely to pose. Now is the time for companies, regulators, and the digital rights community to map out the risks and build safeguards for our rights.
Why AR glasses are even more troubling
When AR technology is incorporated into wearable glasses, the risks to our rights increase — especially when such devices are used in public spaces. There are models of AR glasses in development that would not only allow you to record your everyday life by taking pictures and videos using the glasses, but also to livestream what you see, and even to perform live analytics such as facial recognition on the footage you capture. And if the glasses integrate third-party apps, this could add new ways to violate the rights of bystanders.
It’s arguably possible to do all of this with a smartphone, but there are more serious risks when you use a pair of AR glasses. Imagine you are sitting in the park with your family, and a stranger walks toward you holding his phone in front of his face. Since he is clearly filming you and your family, you are likely to feel extremely uncomfortable about this, and you may even confront the stranger and ask him to stop. But you may not even notice a stranger doing the very same thing using AR glasses, so any social friction he may encounter is removed.
If AR glasses become ubiquitous in public spaces, anyone in your vicinity could easily record you in this way, and even though some models being developed have a light that turns on when recording, that feature could be disabled or masked. These glasses could leverage facial recognition software to identify people in your family, or use “deepfake”-style software to virtually “remove” your clothes (and yes, developers have already created just such an app to do this with photos).
We’re already subject to pervasive surveillance in our daily lives. It’s hard to escape the CCTV cameras in public places, and companies and governments track our actions online. With the increasing use of highly intrusive facial recognition technology, there is a growing movement to push back, including bans on its use in several cities in the U.S. Yet while the public may be watching out for these threats from “Big Brother,” they may overlook the capacity of Facebook’s or Apple’s proposed AR glasses to turn everyday citizens into “Little Brothers,” perhaps without anyone realizing it.
The bottom line is that if we see widespread and unfettered adoption of AR glasses, it could realize a decentralized surveillance panopticon that would be extremely difficult to regulate. The laws that stop people from spying on one another are patchy and confusing, and the average person is not likely to know what behavior is illegal. That’s a problem, because instead of dealing with company misbehavior, we’re dealing with how ordinary people use a product.
The big picture: AR could (further) privatize public spaces
We often hear that social media platforms have become a de facto public sphere, and how this creates huge problems for the exercise of rights such as freedom of expression. With the proliferation of AR glasses in public spaces, we risk stumbling into a situation where these same companies control an unregulated “augmented sphere” imposed on our public spaces.
Here’s how that might work: Facebook augments the center of your city or town with proprietary features. These features require that you have an account — and give up all your personal data — to participate. You value your privacy, but you have a strong incentive to join because all your friends can see what you cannot.
A number of companies are already developing their own digital version of the world – often called “Mirrorworlds” or “AR clouds.” To limit the amount of processing your device would have to do when you enter a public space, the companies would store maps of different places in an AR cloud that your device would access. Companies would compete to serve you the most detailed and up-to-date maps to provide you with the fastest and most advanced AR experience.
Facebook has already acknowledged that it will require crowdsourced user data to build its AR cloud, which it calls LiveMaps. We should be asking whether we are comfortable having Facebook and other companies turn user-generated data about public spaces into a commodity. It also raises the prospect of inequality reinforced by geography, where hip and wealthy areas have better AR maps than under-resourced neighborhoods, cities, or towns, because people in these wealthy areas are more highly valued “customers.”
If the data Facebook uses are crowdsourced, shouldn’t the information be a public resource? Who should control the AR map of a city — the city itself, or a private company trying to make a profit? What happens to the people who are excluded from these digital worlds? These are just a few of the questions companies should answer before we grant them the power to further privatize and profit from our public spaces.
What we can do now to mitigate the risks and build a better AR future
It’s not all gloom and doom. As with other major technological developments, AR’s continued evolution may open new opportunities for the enjoyment of human rights, such as the rights to free expression and assembly.
In 2011, during the Occupy Wall Street protests, Mark Skwarek collected photographs of people participating in the demonstrations, both at Zuccotti Park and in remote locations. Using a mobile app called Layar, he created a virtual protest by placing the images and animations in front of the New York Stock Exchange.
More recently, when Glenn Cantave lost a battle to remove a statue of Christopher Columbus that he considered a symbol of oppression and slavery, he formed a group called Movers and Shakers NYC. This coalition uses augmented reality to run direct action and advocacy campaigns for marginalized communities fighting systemic oppression.
When the internet became part of everyday life in the late 1990s, many of us thought that digital space would enable a more democratic, equal, and free society. More than two decades later, we are still trying to figure out how to govern this space so we can realize that vision.
Today, as we marvel at the innovation and creative uses of AR, we have the opportunity to move forward with our eyes open to the risks, and with the intention of building a collaborative and participatory regulatory framework for the technology that can help mitigate those risks and serve humankind. Some of this work has already begun, such as through projects like the XRSI Privacy Framework, developed with privacy in mind. But there remains a great deal of work to do to ensure that AR will enable digital rights instead of obstructing them. We hope you’re with us in that endeavor.