The views expressed by contributors are their own and not the view of The Hill

Artificial intelligence — the arms race we may not be able to control



“Whoever becomes the leader in this sphere will become ruler of the world,” said Vladimir Putin. The sphere the Russian president was referring to is artificial intelligence (AI), and his comments should give you pause.

Addressing students at the beginning of our Labor Day weekend, Putin remarked, “Artificial intelligence is the future, not only for Russia, but for all humankind,” adding, “It comes with colossal opportunities, but also threats that are difficult to predict.”

For once, I find myself in agreement with the President of Russia, but just this once.

Artificial intelligence offers incredible promise and peril. Nowhere is this clearer than in the realm of national security. Today, uncrewed systems are a fact of modern warfare. Nearly every country is adopting systems in which personnel are far removed from the conflict and wage war by remote control. AI stands to sever that ground connection. Imagine a fully autonomous Predator or Reaper drone: managed by an AI system, it could identify targets, determine their legitimacy, and conduct a strike, all without human intervention.

Not every government welcomes that prospect. The Ministry of Defence of the United Kingdom said in a September press statement that the country “does not possess fully autonomous weapon systems and has no intention of developing them,” and that its weapons systems “will always be under control as an absolute guarantee of human oversight and authority and accountability.”

Let’s think smaller. Imagine a tiny, insect-sized drone loaded with explosives. Guided by a pre-programmed AI, it could hunt down a specific target, whether a politician, a general, or an opposition figure, and decide when, how, and whether to strike based on its own learning. The technology is closer than it sounds: the Howard Hughes Medical Institute recently attached an electronic backpack to a genetically modified dragonfly and flew it remotely.

These examples, however, are ones where humans are involved and largely set the left and right limits of the AI. Yet there are examples of AI purposely and independently going beyond its programmed parameters.

Rogue algorithms led to a flash crash of the British pound in 2016, in-game AIs created superweapons and hunted down human players, and AIs have created their own languages that were indecipherable to humans. AIs have also proved more effective than their human counterparts at spear-phishing: not only did they create more content, they deceived more users with it. These scenarios may seem simple and low-stakes, but extrapolate them into more significant and riskier arenas and the consequences become much greater.

Cybersecurity is no different. Today we focus on the hackers, trolls, and cyber criminals (officially sanctioned and otherwise) who seek to penetrate our networks, steal our intellectual property, and leave behind malicious code for activation in the event of a conflict. Replace the individual with an AI and imagine how fast hacking could take place: networks against networks, at machine speed, all without a human in the loop.

Sound far-fetched? It’s not. In 2016, the Defense Advanced Research Projects Agency held an AI-versus-AI capture-the-flag contest, the Cyber Grand Challenge, at the DEF CON event. AI networks against AI networks.

In August of this year, the founders of 116 AI and robotics companies signed a letter petitioning the United Nations to ban lethal autonomous weapon systems. Signatories included Google DeepMind co-founder Mustafa Suleyman and Elon Musk, who, in response to Putin’s remarks, tweeted, “Competition for AI superiority at national level most likely cause of WW3 imo (sic)”.

AI is not some far-off future challenge. It is a challenge today, and one with which we must grapple.

I am in favor of fielding any system that enhances our national security, but we must have an open and honest conversation about the implications of AI, the consequences of which we do not, and may not, fully understand. This is not a new type of bullet or missile. This is a potentially fully autonomous system that, even with human oversight and guidance, will make its own decisions on the battlefield and in cyberspace.

How can we ensure that the system does not escape our control? How can we prevent such systems from falling into the hands of terrorists or insurgents? Who controls the source code? Can we build in so-called impenetrable kill switches, and if so, how?

AI and AI-like systems are slowly being introduced into our arsenal. Our adversaries, China, Russia, and others, are introducing AI systems into their arsenals as well. Implementation is happening faster than our ability to fully comprehend the consequences.

Putin’s words herald a new arms race. Rushing to field AI weapon systems without guiding principles is dangerous. It risks an escalation that we do not fully understand and may not be able to control.

The cost of limiting the weaponization of AI could vastly exceed that of all our nuclear nonproliferation efforts to date. More troubling, the consequences of failure are equally existential.

Mike Rogers is a former U.S. Congressman who represented Michigan’s 8th District. He served as chairman of the House Permanent Select Committee on Intelligence from 2011 to 2015. He is the founder of the Mike Rogers Center for Intelligence & Global Affairs.

