
The real issue with artificial intelligence: The misalignment problem


Breathtaking advances in technology, from genetic engineering to quantum computing, have opened policy vistas and security challenges that were unanticipated even five years ago. The next decade will bring smaller devices, larger networks and anthropomorphic computers that will extend human thought, where they do not replace it, beyond imagination or belief.

Although AI-doomsday forecasts designed to stoke public anxiety make great headlines and popular podcasts, from the perspective of many Ph.D.-level scientists and engineers, the life-under-Borg predictions are strangely overwrought. One reason artificial intelligence (AI) captures so much attention is that, like satellite navigation and drug discovery, it is hardly distinguishable from magic. Large language models like Bard, Copilot and ChatGPT sound like real people, which makes their wizardry even more fascinating. But they are fraught with errors and sweet-sounding hallucinations, and they will never be infallible.

Our obsession with AI diverts attention and energy away from more imminent and transcendent threats to our society and global human progress. The dangerous misalignment is not of moral values between people and computers, but between people and their ideological opponents. Irrefutable facts and valid (if mistaken) opinions have been replaced by deliberately false ideas injected into our discourse like a potent and addictive narcotic of delusion. If we cannot agree on objective and repeatable scientific insights, or a true historical record, how will we collaborate in the best long-term interests of the country or our planet? Today there is nothing that filters the shibboleths from the facts that are fed to AI computers.  

The real hazard is not machine-derived calamity. It is bad human decisions that are accelerated and amplified by AI. There are plenty of things we think we know, from calculating financial risk to determining criminal recidivism, that, in words often attributed to Mark Twain, "just ain't so." Training computers on discriminatory precedent is irresponsible at best and prejudicial at worst. Repairing flawed ideology in human memory or computer storage is wickedly difficult, and it takes time to focus ethical lenses in both media.

In the real world, and for all of pre-broadcast history, new information or edicts, provable or not, sustainable or not, diffused very slowly. The worst ideas, designed to oppress, exclude, incite and subjugate, were eventually extirpated, sometimes painfully, from the social system. Good ideas, including the demolition of the bad ones, took even longer but eventually succeeded. As Mahatma Gandhi, Martin Luther King Jr., Golda Meir and Nelson Mandela reassure us, the defeat of harmful structures is always just a question of time. The ultimate strategic intelligence is that authentic liberty is renewable but not self-executing. It requires debate and thrives on criticism; AI is capable of neither.

Another deficit, and a source of contamination, is the weaponized misinformation inserted by foreign interests into our popular press and social media. Those pathogens are ingested into the training sets that teach generative platforms how to speak and what to say. AI has neither ambition nor judgment. It is just advanced, and impressive, pattern recognition. Unless we are much more careful and deliberate, it will be years before we expunge the toxic spew from these training sets and align them with our expectations and laws.

Finally, the global market and (until recently) our national security depend on sophisticated components that come from China. We taught them ourselves. Policymakers from both parties expected the Middle Kingdom to become a large market and a friendly competitor. Instead, China has become a fierce commercial rival and America's most worrisome military antagonist. It already trains almost 10 times as many engineering students as we do and will soon produce twice as many engineering Ph.D.s. The AI misalignment here is that China has more of that talent than we do.

The clear and present danger is not artificial intelligence. It is the integrity of its training. 

Like a real brain, AI only "learns" what we teach it. Today's computer models are vulnerable to absorbing wrong ideas and disproven theories about science, history, economics and philosophy. This is no different from schools that promote creationism, Holocaust denial, mercantilism and oppression theories cloaked as real science. Dumb ideas are being embedded into massive computer memories (now about as big as a human brain) that indiscriminately produce conclusions that sound real but cannot be independently validated, traced, checked or challenged. The real-world implications are identical: spiritual superstition, entrenched suspicion and fabricated conflict.

AI has no imagination; it is a mix master of ideas, some good, some bad, that we have already considered. Sometimes the results are interesting, like a new chess move or a previously unseen protein fold, and sometimes they are ridiculous. But hand-wringing over AI itself will lead nowhere. Instead, we should focus on a far superior policy, suggested by the fabulous title of the most influential computer science paper of the last 10 years: "Attention Is All You Need."

No machine created this misalignment, and only human ingenuity will solve it. Our attention should be on listing the "ingredients," just as we already do for food, gasoline, medicines and clothes. We need to make sure that we are teaching these machines things that are scientifically proven, socially aligned and tested for both accuracy and fairness.

Peter L. Levin is an adjunct senior fellow in the Technology and National Security program at the Center for a New American Security, and CEO of Amida Technology Solutions Inc.

