In the AI era, privacy and democracy are in peril
Lawmakers should not let the impeachment inquiry distract them from an equally pressing issue: the unchecked use of artificial intelligence (AI) by the major internet platforms, which is invading our privacy and putting our democracy at risk.
Google reads everyone’s email, analyzes our searches, tracks where we surf and where we go in the physical world, and knows who our friends are, who we have spoken to and much more. The internet giants have appropriated public and private data to create “prediction products” that can forecast individuals’ behavior and manipulate them without their awareness (or consent). Pure profit maximization has led to a capitalistic form of surveillance that is arguably even worse than the Chinese model, which is ostensibly about maximizing the benefits to all of society. These two extreme objective functions – unconstrained profit maximization and state control – lead to the same result: the ceding of free will to AI algorithms that control us overtly or covertly. What does democracy mean if there is no free will?
I recommend regulating the use of personal data for prediction products. I also propose classifying certain platforms as “digital utilities” that aim to maximize public benefit and spur economic growth, much like the interstate highway system and the “information superhighway” have done for physical and electronic commerce.
U.S. regulators should look to the European and Indian models as two low-risk approaches to data protection. In a Washington Post article in September 2018, I showed that four major models of data use have emerged globally, with the U.S. and Chinese models at extreme ends, and the European and Indian models emphasizing individual data protection.
Europe’s General Data Protection Regulation (GDPR) model requires “a data controller to hold and process only the data that is absolutely necessary for the transaction.” Adopting a similar requirement in the U.S. would limit the considerable “behavioral surplus” that these companies harvest for their prediction products, and better protect individuals from explicit manipulation.
But we can and should go further by considering two strengths of the Indian approach. The first is individual empowerment through “data fiduciaries,” who are similar to financial fiduciaries in that their primary allegiance is to the individual; they must obtain explicit consent before sharing personal data.
Second, India views certain types of infrastructure as “digital utilities,” on top of which other utility or commercial platforms can operate. For example, the Aadhaar platform, the world’s largest biometric identification system, provides real-time authentication, confirming, “Are you who you say you are?” Payment platforms are also utilities, and they make use of Aadhaar’s real-time authentication capability. Platforms form a “stack,” with upper layers making use of the high-volume, low-cost utility layers at the bottom.
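To make the “stack” concrete, here is a minimal sketch of the layering idea in Python. It is purely illustrative: the class and method names are hypothetical and do not reflect Aadhaar’s actual interfaces, only the pattern of a commercial payments layer delegating identity checks to a shared utility layer beneath it.

    # Illustrative only: a commercial payments layer (top of the stack)
    # delegating identity checks to a shared authentication utility layer
    # (bottom of the stack). All names are hypothetical, not Aadhaar's API.

    class AuthenticationUtility:
        """Bottom layer: high-volume, low-cost identity verification."""

        def __init__(self, registry):
            self._registry = registry  # user id -> registered credential hash

        def authenticate(self, user_id, credential_hash):
            # "Are you who you say you are?" -- a simple real-time lookup
            return self._registry.get(user_id) == credential_hash

    class PaymentService:
        """Upper commercial layer: reuses the utility layer below it."""

        def __init__(self, auth):
            self._auth = auth

        def transfer(self, user_id, credential_hash, amount):
            if not self._auth.authenticate(user_id, credential_hash):
                return "DENIED: identity not verified"
            return f"OK: moved {amount:.2f} for {user_id}"

    auth = AuthenticationUtility({"alice-001": "a1b2c3"})
    payments = PaymentService(auth)
    print(payments.transfer("alice-001", "a1b2c3", 250.0))  # OK
    print(payments.transfer("alice-001", "wrong", 250.0))   # DENIED

The design point is that any number of upper-layer platforms can reuse the same authentication layer without storing identities themselves, which is what gives the bottom of the stack its high-volume, low-cost character.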
In an effort to create “digital utilities,” U.S. regulators need to consider two properties of internet platforms. First, these platforms do not conform to standard industry boundaries, making antitrust regulation difficult. Which sector does Amazon belong to? Google? Facebook?
Second, internet platforms exhibit a “winner-take-all” outcome due to scale and network effects, which makes it difficult to create competition artificially. There is little room for another search engine, another social media platform, another online retailer or another micro-blogging platform. If we accept the dominant nature of such monopolies, we might regulate them accordingly, in the same way we regulate water and electricity. A federal agency that regulates “digital utilities” must be oriented to the digital future, including the sensor networks that will support driverless cars and other innovations.
In 2017 I recommended that social media platforms be regulated to curb the misuse of Facebook that was fast becoming apparent. I proposed Know Your Customer (KYC) regulation, similar to the rules used in the financial arena, to ensure that the platforms know who is paying them. Verifying identity would go a long way toward reducing fraud, such as fake entities with names like “Fox News Insider Reports” and “US Retirement Bureau” enticing senior citizens to make precious metals investments.
But KYC does not address the unconstrained manner in which the internet giants are gathering, linking and selling public and private data to create products that can predict individual desires and behavior. Although it is not possible to prove conclusively that nefarious parties influenced the 2016 U.S. presidential election outcome, it would be a mistake to wait for irrefutable proof of harm to act. If machines can predict how to alter consumer behavior in a specific way, they are in charge, meaning their owners are in charge.
In a May 2014 op-ed piece in Britain’s Independent newspaper, the famed theoretical physicist Stephen Hawking provided a stark warning on the future of artificial intelligence, noting, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” He warned that dismissing hyper-intelligent machines “as mere science fiction would be a mistake, and potentially our worst mistake ever.”
We are at a crossroads, where humans risk completely losing control of their liberty to AI. We can protect our privacy and our democracy through judicious data protection policy, but we must not delay.
Vasant Dhar is a professor at New York University’s Stern School of Business and the director of the PhD program at the Center for Data Science.