Increasingly, governments are using AI to keep tabs on their citizens. The most widely known example is China, where the government uses facial recognition software to monitor and track more than a billion people in their day-to-day lives.
However, such automated surveillance systems introduce several significant problems.
Poorly trained algorithms can misidentify people as wanted criminals, and even the best struggle to decipher meaning from crowded scenes. Additionally, while these tools may originally have been intended to keep people safe, governments are coming to realize how useful they can be for identifying dissidents or members of a specific minority group.
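To see why misidentification matters at scale, consider a rough, back-of-the-envelope calculation. Every figure below is an illustrative assumption rather than a measured value, but it shows how even a seemingly accurate system can flag far more innocent people than genuine suspects:

```python
# Hypothetical estimate of false matches from mass face scanning.
# All figures are illustrative assumptions, not real-world measurements.

people_scanned = 1_000_000     # faces checked by a city's cameras in a day (assumed)
actual_suspects = 100          # watchlisted people who actually pass a camera (assumed)
false_positive_rate = 0.001    # 0.1% of innocent people wrongly flagged (assumed)
true_positive_rate = 0.99      # 99% of real suspects correctly flagged (assumed)

innocent_people = people_scanned - actual_suspects
false_alarms = innocent_people * false_positive_rate   # innocent people flagged
correct_hits = actual_suspects * true_positive_rate    # suspects correctly flagged

print(f"False alarms per day: {false_alarms:.0f}")     # ~1,000
print(f"Correct matches:      {correct_hits:.0f}")     # ~99
print(f"Share of flags that are wrong: {false_alarms / (false_alarms + correct_hits):.0%}")
```

Under these assumed numbers, roughly nine out of ten people flagged by the system would be innocent.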
It’s time to address the important questions surrounding this technology: are AI surveillance systems likely to see widespread use, what information can they discern from someone’s face, and can anything be done to protect your personal privacy in an age where facial recognition is becoming ever more prevalent?
The global spread of AI surveillance systems
In 2018, Freedom House found (PDF) that 18 countries had purchased automated surveillance software from China. Notably, these included the United Arab Emirates, Venezuela, and Kazakhstan, all of which have been accused of human rights violations in recent years. Since that report was published, however, facial recognition systems have also been adopted by countries such as the United States, the UK, and Japan, though not always for surveillance purposes.
There are important issues that must be addressed before intelligent surveillance systems can be adopted globally. For starters, are they compatible with existing data protection legislation? The ongoing furor over whether facial recognition breaches the GDPR, and the fact that a security researcher was able to access a Chinese database containing millions of records without entering a password, suggest that some countries still have a long way to go on this front.
Ultimately, as this technology is so versatile, it’s unlikely to go away any time soon. Governments will likely continue to upgrade and improve their systems in the coming years, perhaps claiming that they help prevent terrorist attacks by identifying suspicious behavioral patterns long before any incident can take place. Whether the technology will be as effective in practice remains to be seen, however.
What kind of information can automated surveillance systems collect?
It’s often said that if you have nothing to hide, you have nothing to fear, but this is an overly simplistic viewpoint. AI surveillance algorithms can do far more than confirm that you were in a specific place at a specific time. Once you’ve been identified, the collected information could hypothetically be cross-referenced with social media profiles, marriage registries, phone books, bank records, and so on. In that case, once you’ve been seen in public, whoever is monitoring the system could see your entire life story.
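As a purely hypothetical sketch of how that kind of cross-referencing might work, the snippet below joins a single camera sighting against a handful of made-up record sets keyed by the same identity. Every data source, name, and field here is invented for illustration; it is not a description of any real system:

```python
# Hypothetical illustration of cross-referencing a facial-recognition match
# with other records. All data sources, names, and fields are invented.

face_match = {
    "person_id": "A-1042",
    "location": "Main St camera 3",
    "time": "2019-05-14 08:02",
}

# Imagined databases an operator might already hold, keyed by the same ID.
social_media = {"A-1042": {"handle": "@example_user", "employer": "Acme Corp"}}
marriage_registry = {"A-1042": {"spouse": "Jane Doe", "married": "2015"}}
bank_records = {"A-1042": {"bank": "Example Bank", "account_opened": "2010"}}

def build_dossier(match, *sources):
    """Merge whatever each source knows about the matched person into one profile."""
    dossier = dict(match)
    for source in sources:
        dossier.update(source.get(match["person_id"], {}))
    return dossier

profile = build_dossier(face_match, social_media, marriage_registry, bank_records)
print(profile)  # one camera sighting now linked to employment, family, and finances
```

The point is that the face match itself reveals little; it is the trivially easy join against other databases that turns a single sighting into a dossier.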
There is hope, though, at least in democratic countries. As AI surveillance becomes the norm, we’ll likely see existing data-retention legislation updated to specifically address intelligent monitoring systems. We can expect this to focus on what information can and cannot be collected, as well as how long such data can be retained. Of course, for now, nothing is stopping countries from simply changing their laws so that they can collect any information they like.
How can people maintain their privacy in the face of AI surveillance?
Thanks to events like the Snowden leaks and the Cambridge Analytica scandal, citizens are now acutely aware that they are being monitored online. Perhaps this is why they’re turning to privacy tools like the Tor Browser and virtual private networks (VPNs) in droves, even in countries like China, where the internet is heavily filtered and only a handful of VPNs still work.
Just as people can take action to protect themselves online, there are ways to prevent your data from being harvested in the real world, such as concealing your face or distorting its features with glasses or a beard. However, there’s no guarantee these methods will continue to work as the technology improves. Further, if other biometric information, such as the way someone walks or the distance between their pores, is taken into account, it could become almost impossible to conceal your identity in public.
Realistically, the best way to protect yourself against intelligent surveillance systems is to stop them before they become a problem. Governments are unlikely to implement such problematic technology in the face of public backlash, so it’s vital that you let your representatives know your opinions on the matter. Remember: facial recognition isn’t just used to identify criminals; it can compile a detailed dossier on every single person who passes the camera, and there’s no telling who will be able to see the resulting data.
About the Author
Paul Bischoff is a tech journalist, privacy advocate and VPN expert. A digital nomad who depends on the internet to make a living, he’s always seeking out the best value and highest quality products and services on the web. He previously worked as the China editor at Tech in Asia and is a regular contributor at Mashable, as well as several blogs for internet startups around the world. You can find him on Twitter at @pabischoff.