Governments’ responses to today’s pandemic are laying a foundation for tomorrow’s surveillance state.
New smartphone apps are collecting biometric and geolocation data to automate contact tracing, enforce quarantines, and determine individuals’ health status. Government agencies are harvesting more user data from service providers without oversight or safeguards against abuse. Police and corporations are accelerating the rollout of technologies to monitor people in public, including facial recognition, thermal scanning, and predictive policing tools.
History has shown that powers acquired during an emergency often outlive the original threat. Governments in democracies and authoritarian states alike are now exploiting the health crisis to digitize, collect, and analyze our most intimate data, threatening permanent harm to our privacy.
In at least 54 of the 65 countries we tracked as part of “Freedom on the Net 2020,” smartphone apps have been deployed for contact tracing or ensuring quarantine compliance. While some developers have created new privacy-centered products, such as the international consortium behind the DP-3T protocol and the application programming interface jointly developed by Apple and Google, many apps send data directly to government servers and are closed source, preventing any third-party review or security audit.
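The privacy difference between these designs is architectural. The sketch below is purely illustrative, not the actual DP-3T or Apple-Google code, and its names and parameters are hypothetical simplifications. It shows the core idea of a decentralized protocol: phones broadcast short-lived random identifiers, contact logs never leave the device, and only a person who tests positive publishes their own keys so that other phones can check for a match locally.

```python
# Hypothetical, simplified sketch of a decentralized exposure-notification
# scheme (DP-3T-style). Not the real protocol; names and parameters are
# invented for illustration.

import hashlib
import os

ROTATION_MINUTES = 15  # identifiers rotate often enough to frustrate tracking


def new_daily_key() -> bytes:
    """Each phone generates its own random secret key every day."""
    return os.urandom(32)


def derive_ephemeral_id(daily_key: bytes, interval: int) -> bytes:
    """Short-lived Bluetooth identifier derived from the daily key.

    Without the key, the identifier cannot be linked to a person,
    a phone, or a location.
    """
    return hashlib.sha256(daily_key + interval.to_bytes(4, "big")).digest()[:16]


# Identifiers overheard from nearby phones are stored locally, never uploaded.
observed_ids: set[bytes] = set()


def check_exposure(published_keys: list[bytes], intervals_per_day: int = 96) -> bool:
    """When a user tests positive, only their daily keys are published.

    Every other phone re-derives the corresponding identifiers and checks
    for a match against its own local log. No contact list, name, or GPS
    trace ever reaches a server.
    """
    for key in published_keys:
        for interval in range(intervals_per_day):
            if derive_ephemeral_id(key, interval) in observed_ids:
                return True
    return False
```

A centralized, closed-source app inverts this design: the matching, and therefore the raw contact and location data, sits on a government server, which is precisely what makes it attractive for purposes beyond public health.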
In many countries, cybersecurity standards have been deliberately weakened to allow broader data collection by state authorities. The closed-source app Quarantine Watch, developed with support from the Indian state government of Karnataka, requires users to send selfies, complete with geolocation metadata, to prove that they are complying with mandatory isolation. State officials have joked that “a selfie an hour will keep the police away.”
Russia’s Social Monitoring app accesses GPS data, call records, and other information and requests random selfies from users to enforce quarantine orders and other restrictions on movement. In just over a month, authorities imposed some 54,000 fines, some of them erroneous or arbitrary: those penalized included the wrong identical twin, a bedridden professor, and people who missed selfie requests sent in the middle of the night while they slept.
In at least 30 countries, governments are using the pandemic to engage in mass surveillance in direct partnership with telecommunications providers or military intelligence agencies. To support “track and trace” efforts, Pakistan’s government has retooled a secretive antiterrorism system developed by an intelligence agency itself implicated in flagrant human rights abuses. There are also reports of intelligence agents tapping the phones of hospital patients to determine whether their friends and family mention having symptoms themselves.
No country has taken a more comprehensive and draconian approach to COVID-19 surveillance than China, whose monitoring system was already the most sophisticated and intrusive in the world. Authorities have rolled out opaque mobile apps, dangerously broad data-sharing agreements, and upgraded video and biometric systems. Artificial intelligence companies like Hanwang claim they can now identify people even when they are wearing masks. Across 10 cities, facial recognition cameras have been upgraded with thermal detection technology that purportedly can scan crowds and identify who has a fever.
Contact tracing is vital to managing a pandemic. But digital monitoring programs, which sweep up far more identifiable information than individual testing, are being implemented hastily, often outside the rule of law and without the safeguards necessary to protect basic rights.
Data collected from smartphone apps or by state agencies, including locations, names, and contact lists, can be paired with existing public and corporate datasets to reveal intimate details of people’s lives: their political leanings, sexual orientation, religious beliefs, and whether they receive specialized forms of health care. The portraits that emerge can have serious repercussions, especially in countries where such characteristics can lead to harassment, arrest, and even targeted violence.
These public health surveillance systems will be difficult, if not impossible, to decommission. As with national security matters, state agencies will always argue that they need more data to protect the country. There will also be greater demand for health-related information from insurers, credit agencies, and other industries that could profit from it.
The public should be deeply skeptical when private companies and government authorities promise purely technological solutions to problems that in fact require concrete economic, societal, or political action.
To blunt this expansion of mass surveillance, we need far greater public deliberation as well as independent oversight. At the very least, authorities should have to prove that a proposed measure is necessary and fit for purpose.
To emerge from the pandemic with fundamental rights intact, narrowly tailored and transparent rules are needed to minimize what data are collected, who collects them, and how they can be used. Without these safeguards, the marginal benefits of pandemic surveillance are outweighed by the threat it poses to democratic values and human rights. The future of privacy depends on what we do next.
Adrian Shahbaz is Freedom House’s director of technology and democracy. Allie Funk is a senior research analyst.