
Could a network approach help NATO deal with generative AI?

Photo: A phone screen displays a statement from Meta's head of security policy, with a deepfake video of Ukrainian President Volodymyr Zelensky urging his soldiers to lay down their weapons shown in the background, in Washington, D.C., Jan. 30, 2023. (Olivier Douliery/AFP via Getty Images)

Last month, President Biden signed a $95 billion national security package providing funding to Ukraine, Taiwan and Israel. For Ukrainians, this aid is vital relief: absent U.S. support, Ukraine's ammunition shortages would likely hand Russia lasting battlefield gains. Alongside much-needed military aid, the supplemental package reassures Ukrainians that the United States and NATO remain committed to supporting their continued defense against Russia's aggression.

However, NATO's struggle is not confined to the battlefield. Online, the war in Ukraine has fueled a surge of disinformation campaigns aimed at the U.S. and its NATO allies, and the advent of generative AI is bound to make the problem more severe. Through deepfakes and other highly realistic AI-generated synthetic media, NATO is likely to find itself on the receiving end of a barrage of content whose veracity is difficult to ascertain.

How should NATO respond to AI-generated media? Various approaches have been proposed, from new content verification systems to public media literacy campaigns. But there is one approach NATO should consider more broadly: a "network approach," a term coined by Tyler McBrian in a paper for the Innovations for Successful Societies program at Princeton University. Applied to generative AI, a network approach starts from the premise that international organizations and national governments cannot identify every relevant piece of AI-generated content on their own. NATO would therefore partner with civil society, non-governmental organizations (NGOs) and other private actors across the alliance's member states. These civic and private partners would help identify AI-generated synthetic media that purport to be authentic and notify NATO, which would then coordinate a response with member states' national governments.

This network approach offers two major strengths. First, it would help NATO address the challenge of generative AI more effectively despite its limited resources. With a growing set of commitments, from aid to Ukraine to building new partnerships with Argentina, South Korea and Japan, the alliance could benefit from ways to stretch its resources further across all its goals. Through a network approach, NATO could delegate part of the work of identifying malicious AI-generated content to civic organizations in its member states, freeing up staff and funding for other tasks in the cyber theater.

Second, and more importantly, it would serve as a form of much-needed public diplomacy. By engaging local civil society organizations and private groups to verify AI-generated content, the alliance could build working relationships that foster trust, legitimacy and credibility within these communities. As some publics within the transatlantic alliance grow increasingly skeptical of NATO, such public diplomacy could ease negative perceptions of the alliance among critics and shore up support within member states.

Furthermore, there is already empirical evidence of this solution's efficacy. The first country to pioneer the network approach was a NATO member: Estonia. Prompted by reports of misinformation in the United States and the United Kingdom, the Estonian government devised the first network approach to disinformation to protect the integrity of the country's elections. It formed an interagency task force in cooperation with NGOs, firms and other groups that helped the government identify false information and adapt its public messaging to counter it. Thanks to this approach, Estonia ranked among the top European countries for media literacy and disinformation strategy in the 2021 Media Literacy Index published by the Bulgarian Open Society Institute.

Though it offers benefits, the network approach also faces significant limitations. First, as generative AI models improve, local partners may soon struggle to verify whether content is AI-generated; even today, detection tools are far from perfect. This limitation will be hard to overcome unless AI-detection technology improves alongside AI content generation, which means the network approach may be viable only in the short term. To work around this problem, actors in NATO member states should invest more in content provenance and verification systems.

Second, partners are likely to vary substantially in their efficacy. Some civil society organizations will be more adept at identifying AI-generated content than others, overrepresenting input from certain partners and biasing the AI-generated content that NATO sees and chooses to address. Resolving this challenge would require NATO to select partner organizations that form an unbiased, representative sample across the alliance. This constraint may also mean the network approach should be used sparingly, only in cases where AI-generated content is so persistent and widespread that multiple partners detect it.

Third, evidence from other countries is needed to verify the approach's efficacy. Estonia is a relatively small country with high public trust in government and heavy investment in digital literacy, so it is unclear how well the approach would translate to countries with lower trust in state institutions or less investment in media literacy. In low-trust countries, for example, civil society organizations may be less willing to partner with government actors, undermining the partnerships central to the approach's success.

Given these limitations, NATO would benefit from testing the network approach before scaling it. It could launch pilot programs in member countries with high levels of digital literacy and public trust, working with local civil society organizations and firms to flag AI-generated content in those regions. If the pilots prove effective, NATO could then scale the approach to counter AI-generated misinformation in Ukraine and Western Europe. Done well, this solution could help better prepare us for our AI future.

Sergio Imparato is a lecturer in government and assistant director of undergraduate studies in the Department of Government at Harvard University. Sarosh Nagar is a Marshall Scholar and researcher at Harvard. His work has previously been published by the United Nations, The Hill, JAMA and Nature Biotechnology.