
Building AI with democratic values starts with defining our own

AP Photo/Peter Morgan, File
FILE – A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Jan. 5, 2023. A popular online chatbot powered by artificial intelligence is proving to be adept at creating disinformation and propaganda.

Policymakers describe their visions for artificial intelligence with statements of values. Secretary of State Antony Blinken has argued that liberal democratic countries should develop and govern AI in a way that “upholds our democratic values” and combats “the horrors of techno-authoritarianism.” Congressional Republicans have urged the development of AI in a manner “consistent with democratic values.” 

Initial attempts to realize these visions have produced guiding principles intended to make AI systems support democratic values. These principles, such as accountability, robustness, fairness and beneficence, have enjoyed broad consensus despite the very different constituencies and values of their creators. 

But despite being sold as supporting “democratic values,” these same principles also feature prominently in the AI policy documents of non-democratic states such as China. 

The combative rhetoric contrasting “democratic” and “authoritarian” visions for AI sits oddly beside this broad agreement on high-level principles. The discrepancy points to three steps policymakers must take to develop and govern AI in a way that truly supports democratic values. 

First, calls for developing AI with democratic values must engage with the many different conceptions of what “democracy” entails. If policymakers mean that AI should strengthen electoral democracy, they could start at home by investing in, for instance, the use of algorithmic tools to combat gerrymandering. If policymakers mean that AI should respect fundamental rights, they should enshrine protections in law — and not turn a blind eye to questionable applications (such as surveillance technology) developed by domestic businesses. If policymakers mean that AI should help build a more just society, they should ensure that citizens do not need to become AI experts to have a say in how technology is used. 

Without more precise definitions, lofty political statements about democratic values in AI too often take a back seat to narrower considerations of economic, political and security competition. AI is widely seen as central to economic growth and national security, creating incentives to overlook broader values in favor of strengthening domestic industries. The use of AI to mediate access to information, such as on social media, also makes it a central facet of political competition. 

Unfortunately, as rhetoric escalates and winning these economic, security and political competitions comes to seem ever more important, uses of AI that sit uneasily with professed values become increasingly easy to justify. In the process, imprecisely defined democratic values for AI can be co-opted and corrupted, or become little more than cover for hollow geopolitical interests. 

Second, consensus AI principles are so flexible that they can accommodate broadly opposed visions for AI, making them unhelpful in communicating or enforcing democratic values. Take the principle that AI systems should be able to explain their decision-making processes in human-understandable ways. This principle is commonly said to uphold a “democratic” vision of AI. But these explanations can be conceptualized and created in many ways, each of which confers benefits and power on very different groups. An explanation provided to an end user within a legal context that allows them to hold developers accountable for harm, for example, can empower people impacted by AI systems. However, most explanations are in fact produced and consumed internally by AI companies, positioning developers as judge and jury in deciding how (and whether) to remedy the problems that explanations identify. To uphold democratic values — promoting, for instance, equal access and public participation in technology governance — policymakers must define a much more prescriptive vision for how principles like explainability should be implemented. 

In other settings, democratic values are embodied not in consensus principles themselves, but in how those principles are traded off against one another. Take neural implants, devices that record brain activity. Applying AI techniques to reams of this data might speed the discovery of new therapeutics for neurodegenerative diseases. But research subjects whose brain data aids these discoveries face severe privacy risks if future technological advances allow them to be identified from nominally anonymized data — and they might not even be able to afford the resulting therapies, which are likely to be cost-prohibitive at first. In situations like these, statements of principles alone are insufficient to ensure that AI upholds democratic values. Rather, policymakers must define processes for making the difficult decisions that arise when principles come into tension. 

Finally, effectively implementing consensus AI principles is far from a straightforward technical process. Instead, it requires the difficult work of building strong and trusted public institutions. 

Take the oft-stated principle that AI systems should be “accountable” to their users. Even in the presence of legal structures enabling redress from automated systems, accountability is not meaningful if individuals must become AI experts to protect their rights. Rather, accountability requires a strong and technically informed civil society to advocate for the public. One important component is advocacy organizations with the technical capacity to examine the use of automated systems by powerful businesses and government bodies and hold them to account. Independent media also plays an important role in effecting accountability by publicizing undemocratic trends. For example, it would be difficult for an impacted individual to identify and challenge the subtle biases in criminal sentencing algorithms, but ProPublica’s 2016 investigation drew broad policy and research attention to algorithmic bias. 

Strong, trusted and flexible governance institutions are especially important as policymakers grapple with complex technical issues. The difficulty of turning consensus AI principles like “safety” and “robustness” into concrete policy places legislators between a rock and a hard place. On the one hand, vaguely worded legislation designed to keep pace with advances in technology creates business uncertainty and high compliance costs, preventing the public from accessing the full benefits of new technologies. On the other, narrowly targeted rules designed with these concerns in mind will quickly become outmoded as technology evolves. 

One solution to this dilemma is to equip regulatory bodies and civil society watchdogs with broad mandates and technical capacity. But surveys show that low public trust in governments and other institutions extends to AI, and recruiting and retaining technically sophisticated staff costs more than taxpayers’ representatives are typically willing to fund. Implementing a democratic vision for AI requires that policymakers invest in institutions and that these institutions perform the slow and difficult work of advocating for the public, building strong accountability mechanisms and developing new ways to solicit public opinion on highly technical subjects. 

The challenges to meaningfully defining and implementing a democratic vision for AI are significant, requiring financial, technical and political capital. Policymakers must make real investments to address them if “democratic values” are meant to be more than the brand name for an economic alliance. 

Matt O’Shaughnessy is a visiting fellow in the Technology & International Affairs Program at the Carnegie Endowment for International Peace. 

