The views expressed by contributors are their own and not the view of The Hill

For AI national security, improve but don’t isolate cyber infrastructure

U.S. President Joe Biden chats with a guest during an event regarding government regulations on artificial intelligence in the East Room of the White House on Monday, October 30, 2023. (Photo by Demetrius Freeman/The Washington Post via Getty Images)

Perhaps the most powerful undercurrent to Washington’s drive to adjust to AI is fear of China’s powerful AI sector. 

Increasingly, Congress’ energy is directed towards AI export controls — specifically through the House’s bipartisan ENFORCE Act. Introduced in May, it would grant the Department of Commerce authority to control exports of AI system software, on top of preexisting control over AI hardware.

National security fears are spurring the conversation, with cybersecurity receiving specific attention. Foreign Affairs Committee ranking member Raja Krishnamoorthi said: “Without action, there is a grave possibility that a future cyberattack on the American homeland could be enabled by AI technologies.”

While I share these concerns, the proposal might inadvertently tie the hands of domestic and international security workers and undermine comprehensive cyber defense efforts.

Looking under the legislation’s hood, first consider what’s covered. The president would carry new authority to restrict “automated vulnerability discovery” systems — that is, “bug-hunting” machines that expose tech vulnerabilities. While the legislation focuses on “offensive” systems, vulnerability detection lies at the heart of defensive efforts, too. Any restrictions on this tech will necessarily cut both ways.

Already, this should perk up the ears of the security-conscious. Riskier still would be the new presidential authority to control not just AI tech itself, but also “other activities that may support … maintenance, repair, overhaul, or refurbishing of” AI tech the president deems critical to national security. 

Put simply, this bill would restrict not just the export of AI cyber tools, but the “export” of critical cybersecurity services.

Why are these provisions problematic?

First, restricting cybersecurity tools and services nurtures global insecurities. Cybersecurity is rarely a zero-sum game, and the cyber environment transcends borders. The vast majority of system code, irrespective of national origin, draws deeply from a common well of free and open-source software. 

So, while it may be tempting to assume that walling off defensive cyber technologies and services might give the United States an AI upper hand, realistically, it risks degrading the overall quality of the global defensive effort.

Unfortunately, untreated insecurities left festering abroad can quickly nurture crises at home. As the White House’s National Cybersecurity Strategy points out, “An attack on one organization, sector or state can rapidly spill over to other sectors and regions.”

The world learned this bitter lesson in 2017, when the Russian government launched NotPetya, a uniquely vicious piece of malware, in Ukraine. It quickly snowballed into a systemic cyber disaster. 

Taking advantage of shared global insecurities and tight system integration, NotPetya traveled far beyond its target, disabling American hospitals and stalling a shocking one-fifth of global shipping capacity. The worm even turned on its creator, infecting Russia’s state oil company.

The lesson of NotPetya is that a rising defensive tide lifts all boats. Realistically, border restrictions on defensive tech weaken the international flank of our security perimeter. Without the latest vulnerability detection systems or agile AI system maintenance, foreign cybersecurity could degrade, inviting more NotPetyas.

Second, cybersecurity hinges on timeliness and agility. When a vulnerability is discovered, the clock starts ticking: attackers race to exploit it, demanding immediate defensive action.

What doesn’t begin immediately is the export-license process put in place by border controls. Rather than acting to patch security holes, vendors and service companies may find themselves digging through Commerce Department documentation, calling lawyers, and, if export licenses are indeed required, waiting out processing delays.

Given the newness of highly capable AI systems, it’s hard to calculate the exact risks invited by unpatched systems. That said, with national security concerns driving this export control conversation, we must not ignore the potential cyber harms that service delays might sow.

It’s important to note that while the security benefits of export controls are questionable, the policy impulse is not.

Since the release of ChatGPT, the volume and nature of malicious cyber activity have indeed transformed. According to the security firm SlashNext, language model automation has propelled a ground-shaking 1,265 percent increase in malicious “spear-phishing” emails. Security forensics teams have even found early evidence of AI-generated malware. While this is not yet a crisis, policymakers have decisions to make.

Rather than rolling the dice on export controls, Congress should focus dollars and effort on shoring up our cyber infrastructure and capacity. 

The National Vulnerability Database — the federal catalog of known cyber risks — is experiencing staffing-related backlogs. The National Institute of Standards and Technology — our standards development agency — is operating on a skeleton crew while facing an ever-greater pile of congressional and executive demands. Basic investments in these services may sound boring, but the return is unlikely to be matched by other AI security bets.

The power of AI and its many security unknowns demand serious attention, but export controls are an uncertain gamble. By focusing instead on cyber institutional capacity, Congress can lay a more certain foundation for success.

Matthew Mittelsteadt is a technologist and research fellow with the Mercatus Center at George Mason University.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.