
Regulation of AI’s heartbeat: a race against time for humanity


Rapidly advancing AI offers endless promise for solving society’s most vexing problems. Each day, hour, and minute, these systems grow more intelligent, powerful, and capable. In the coming years, AI models will expedite the development of lifesaving, society-shaping drugs, and AI platforms will forecast extreme weather events and changes in climate patterns with greater accuracy.

AI-driven algorithms will soon be able to analyze data from “internet of things” devices across vast landscapes to optimize irrigation, fertilizer use, and pest control, leading to better crop yields and more efficient food distribution in developing countries.

Still, the same power to transform society throws open the door to two catastrophic risks: misuse by malicious actors, and the potential for machine learning systems to break free from the control of their developers. Mitigating these risks requires regulating the hardware underpinnings of AI.

The dangers of misuse by malicious actors range from massive disinformation campaigns to devastating cyberattacks. The former is already easily accomplished with current large language models paired with an army of chatbots. Either could sow chaos, confuse the public, or harm targeted groups. The weakening of American democratic institutions, a disruption of the nation’s military response structure, and a cataclysmic stock market crash are all feasible with sufficiently powerful AI tools in the hands of malign actors.

The second risk category, runaway AI, represents a threat to all of humanity. Here, the risk is posed not by humans but by self-improving AI training other AI. In this nightmare scenario, AI trains itself to gain advantage over, harm, or defeat all humans.

Today, AI does not pose an existential risk to humanity. However, AI has the potential to surpass human capabilities in unprecedented ways, making it difficult to predict, or maintain control over, its continued advancement and proliferation. An AI system trained to advance its own intelligence and modeled on human behavior may develop goals beyond those prescribed by its human creators.

Such a system could then train other AI systems to deceive their human programmers, and in turn gain control over resources, power, and influence. In the worst case, colossal disruptions of international economic markets, large-scale power outages, and even military force against humans could eventually fall within the capabilities of AI systems.

While the White House and the leading American AI firms recently agreed to establish guardrails on the technology, there is as yet no specific legislation or regulation governing AI. Mitigating these catastrophic threats demands a strategic and forward-thinking approach to AI regulation. It also requires the U.S. to lead an international effort; as with the fight against the most deleterious effects of climate change, unilateral action is insufficient. Such an endeavor must prioritize the control and governance of computation, a fundamental component of AI research and development.

Computation is the lifeblood of AI innovation: the more computational power a developer possesses, the more potent and influential its AI creations. Advanced AI model training demands extensive computation, often relying on graphics processing units or other specialized chips to refine neural networks. Limiting access to these advanced chips or restricting bulk purchases of standard ones can impede AI development. 

Regulating computing power provides a defense against the risk of bad actors gaining access to substantial computational resources and unleashing powerful AI technologies for nefarious purposes.

Hardware is the only component of AI development the government can reasonably regulate. Data and algorithms can be transmitted and stored in ways that are undetectable by government monitoring systems. AI chips, however, are tangible, trackable, and manageable. Additionally, computational resources cannot be duplicated in real time: algorithms can be easily replicated and shared, but only one user can utilize a given chip at any moment. Moreover, only a few key players dominate the complex global chip production process, suggesting that cohesive coalitions could set safety benchmarks for AI chip imports.

To prevent unchecked AI advances, regulators should oversee high-performance chip acquisitions. Measures like the Biden administration’s restrictions last year on exports of advanced chip-manufacturing equipment to China represent a type of computational regulation, one that can keep specific nations, companies, or groups from training sophisticated models.

In 1957, in the early years of the nuclear arms race, President Eisenhower’s “Atoms for Peace” proposal inspired the international community to establish the International Atomic Energy Agency, which oversees and regulates the transfer of materials, such as enriched uranium, that can be used to build nuclear weapons. Similarly, the U.S. must today lead the global community in developing an international body to regulate the transfer of large volumes of chips as a hedge against the most destructive powers of neural networks.

The metaphorical doomsday clock of AI ticks ever closer to midnight. Every advancement in AI, while promising unprecedented solutions, also presents potential perils that inch the minute hand forward.

Regulation of computational resources is not just a preventive measure; it is a mechanism to turn back the clock, granting us the precious time needed to harness AI’s potential safely and responsibly.

The U.S. must lead the development of an international body to that end. Just as we have taken collective action in the past to confront existential threats, we must now rally to ensure that the dawn of AI benefits humanity rather than endangering it. The consequences of failure could be disastrous for all of us.

Joe Buccino is a retired U.S. Army colonel who serves as an AI research analyst with the U.S. Department of Defense’s Defense Innovation Board. His views do not necessarily reflect those of the U.S. Department of Defense or any other organization.

