AI ‘wild west’ raises national security concerns 

Photo illustration: The welcome screen for the OpenAI ChatGPT app displayed on a laptop screen on Feb. 3, 2023, in London. (Leon Neal/Getty Images)


The rise of generative artificial intelligence (AI) tools like the increasingly popular ChatGPT and rival chatbots is raising concerns about potential national security risks.

So far, the tools have been rolled out with guardrails largely set by the tech industry itself. That raises concerns about how they may be abused by malicious actors or simply go awry in commercial use, according to experts.

“It’s the Wild West right now. And we are way early on in the process of understanding consequences,” said Roger Cressey, a former National Security Council member in the Clinton and Bush administrations.

“At this point, the most important thing we can do is not accept anything that is coming to market at face value, and instead ask a bunch of basic questions,” he added. 

ChatGPT burst onto the scene in November, and its popularity has skyrocketed since.

In the following months, Microsoft, which has invested billions in OpenAI, ramped up use of the generative AI technology in its products. On Monday, Microsoft announced new features powered by the technology behind ChatGPT, including tools that let customer service departments build virtual agents.

The latest announcement follows Microsoft’s rollout, limited to a preview audience for now, of a new Bing search engine that incorporates ChatGPT into search. Google announced a rival generative AI chatbot, Bard, the same week as Microsoft’s Bing announcement. 

Reliance on AI could be risky

Paul Rosenzweig, former deputy assistant secretary for policy at the Department of Homeland Security between 2005 and 2009, said a dependency on generative AI technology poses major risks across sectors. 

“People are going to increasingly come to use and rely on generative AI for everything from intelligence analysis, to targeting, to planning supply chain resupply. Anything that human beings do, AI could, at least in theory, do as well if not better,” Rosenzweig said.

On one hand, that offers benefits by increasing efficiency or reducing costs. On the other, the more dependent humans are on technology, the more at risk they are when a system goes down, he said.

Rosenzweig said the more blatant risk is that hackers could break into systems and make the technology perform poorly or give wrong information, while the other, “more subtle” risk is “monoculture dependence.”

“At least in critical systems, the greatest risk is not just having the AI or having anything, but having only one type of it — one operating system, one AI system, one communication system for email, one cloud server, you name it. One of anything in national security is bad,” he said. 

In sectors that deal with sensitive data, such as finance or health care, the use of generative AI could be particularly concerning, experts said.

Generative AI amplifies data privacy risks

“We know almost nothing about how the data collected by generative AI is being stored and processed,” Rosenzweig said. “We know of a certainty that it is being used to improve the product.”

Again, that offers a benefit, he said, by making the AI smarter, but nonetheless there’s a “whole boatload of data security concerns.” 

Data privacy concerns are not unique to generative AI, but the quickly evolving technology underscores why experts say stronger restrictions are needed, especially in the U.S., which lacks a comprehensive federal data privacy law.

“It’s one of the greatest bipartisan failures in recent history, the inability of Congress to pass remotely basic data privacy legislation that protects children, consumers and others,” Cressey said. 

Jessica Brandt, policy director for the artificial intelligence and emerging technology initiative at the Brookings Institution, said the lack of a comprehensive data privacy law in the U.S. puts the nation “out of step with its peers.”

“I don’t know that it’s the panacea or only policy approach that can mitigate some of the harms that generative AI can pose, but it’s one that we absolutely need and that Congress should pursue,” she added. 

Quickly evolving tech leads to game of ‘catch up’

Since it is not yet clear what applications or potential harms may arise, Brandt said, policymakers should take what steps they can now and update their plans as more is learned about how the technology can be used.

Policymakers are already weighing national security concerns and other risks posed by generative AI. The Senate Homeland Security Committee is holding a hearing about artificial intelligence risks and opportunities Wednesday morning, and OpenAI’s CEO Sam Altman earlier this year met with lawmakers including tech regulation-focused Sens. Ron Wyden (D-Ore.) and Mark Warner (D-Va.).

Guidelines and policy approaches may have to adapt quickly to keep pace with the technology.

“What we’re seeing right now is Wright brothers, and we’re flying now in generative AI, which is fantastic, but we’re nowhere near close to where we think we’re going to be in a few years, where the cycle from the Wright brothers to the jet engine in this domain is going to be rapid. And we’re constantly going to play catch up,” Cressey said.

