Is artificial intelligence more formidable than nuclear weapons?
Of all the potentially revolutionary new technologies, artificial intelligence (AI) may be the most disruptive. In layman’s terms, AI refers to systems able to perform tasks that normally require human intelligence, such as visual and speech recognition, decision-making and, perhaps one day, thinking.
Thinking? AI has already defeated the world’s best chess and Go players. Suppose AI surpasses the intelligence of human beings. What then?
Could AI’s super-intelligence cure cancer, enhance wellbeing, redress climate change and deal with many of the planet’s worst evils? Or might a super-smart AI turn on mankind as portrayed in the Terminator movies? Finally, is the potential of AI being exaggerated?
Albert Einstein described the universe as “finite but unbounded.” That definition could fit AI’s future applications. But how will we know?
Perhaps the only comparably disruptive technologies were nuclear and thermonuclear weapons. These weapons irreversibly disrupted and changed the nature, conduct and character of the politics of war. The reason: no winners, only victims and losers, would emerge after a thermonuclear holocaust eviscerated the belligerents.
What then are the common links?
Nuclear weapons provoked often-fiery debate over their moral and legal implications and over when or how they could or should be employed: from counterforce first strikes against military targets, to countervalue retaliation against population and industrial centers, to “tactical” use to limit escalation or rectify conventional arms imbalances. AI has reignited debate over equally critical questions about its place in society.
Nuclear weapons ultimately led to a doctrine and “rules of the game” to deter and prevent their spread and use, partly through arms control. Will AI lead to a regulatory regime, or is the technology too universal for any governing code?
Nuclear weapons are existential. Are there conditions under which AI could become as dangerous? Proliferation of these weapons led to international agreements to prevent their spread. Will that apply to AI?
During the Cold War, it was argued that if one side gained nuclear superiority over the other, conflict or more aggressive behavior would follow. Does AI raise similar concerns?
Important differences exist. First, nuclear weapons affected national security above all. AI most certainly will affect the broader sweep of society, as the industrial and information revolutions have, with positive and negative consequences.
Second, it was the sheer destructive power of nuclear weapons that made them so significant; AI, at this stage, needs an intermediary link to exercise its full disruptive power. Ironically, as societies became more advanced, those two earlier revolutions had the unintended consequence of also creating greater vulnerabilities, weaknesses and dependencies subject to major and even catastrophic disruption.
COVID-19, massive storms, fires, droughts and cyberattacks are unmistakable symptoms of the power of the new MAD: Massive Attacks of Disruption. AI is a potential multiplier, able to interact with these and other disruptors, exploiting inherent societal weaknesses and vulnerabilities and creating new ones, but also able to help prevent their harmful effects.
Last, unlike nuclear weapons, AI, if used properly, will have enormous and even revolutionary benefits for the human species.
The critical question is what mechanisms can identify what former Defense Secretary Donald Rumsfeld called the “known knowns, known unknowns and unknown unknowns” regarding AI.
A national AI commission just completed its effort. Commissions can often bury a tough topic. The 9/11 Commission did stellar work, but only a portion of its most important recommendations was implemented. Forming the Department of Homeland Security and the Office of the Director of National Intelligence did not bring the needed reform, because those agencies ultimately added layers to an already bloated government bureaucracy.
That criticism aside, instead of another AI commission, a permanent AI oversight council, backed by substantial research funding to examine AI’s societal implications, must be created. Membership should be drawn from the public and from the legislative and executive branches of government.
Funding should go to the best research institutions, another parallel with nuclear weapons. During the Cold War, the Pentagon underwrote countless studies covering all aspects of the nuclear balance. The same must apply to AI but with wider scope.
This council must also coordinate, liaise and consult with the international community, including China, Russia, allies, friends and others, both to widen the intellectual aperture and as a confidence-building measure.
By employing lessons learned from studying the nuclear balance, not only can AI’s potentially destructive consequences be mitigated; more importantly, if properly utilized, AI has, as Einstein observed about the universe, nearly unbounded opportunity to advance the public good.
Harlan Ullman, Ph.D., is United Press International’s Arnaud de Borchgrave Distinguished Columnist. His latest book, due out this year, is “The Fifth Horseman and the New MAD: The Tragic History of How Massive Attacks of Disruption Endangered, Infected, Engulfed and Disunited a 51% Nation and the Rest of the World.”