
Artificial intelligence: Beware the tech imperative

This picture taken on Jan. 23, 2023, in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT, a conversational artificial intelligence software application developed by OpenAI. (Photo by Lionel Bonaventure/AFP via Getty Images)

“Any sufficiently advanced technology is indistinguishable from magic,” science fiction icon Arthur C. Clarke sagely opined. Judging from the excitement, awe and fear generated by GPT-4, that aphorism captures the growing concern about artificial intelligence (AI).

The bot, which can write research papers, fiction and much else, is the latest example of the technological imperative: If something can be created and commercialized, it will be, regardless of the dearth of rules and regulations and, too often, of the potential risks. It is a product of the Silicon Valley utopian ethos: “Move fast and break things.” Fix it later. The numerous mistakes and bizarre answers from ChatGPT suggest it is no exception.

OpenAI, Google and a host of other Big Tech and start-up actors are racing to commercially deploy the best AI chatbot. The technology is rocketing forward years faster than most expected, before the big questions about its impact on humanity, or any framework of governance, have been duly considered.

When Sam Altman, CEO of OpenAI, which created ChatGPT, says, “I’m a little bit scared of this,” and when more than 1,000 leading technologists call for a moratorium on further development of such systems, it’s worth paying attention.

What prompted the call for a timeout from top technologists like Elon Musk and Apple co-founder Steve Wozniak is the alarm, as their open letter says, that “AI systems with human-competitive intelligence can pose profound risks to society and humanity….”

AI is the most disruptive technology since the internet and, almost certainly, more revolutionary. The internet’s creators saw it as an absolute good, opening up instant global flows of information. Insidious social media, hacking and cybercrime were not on anyone’s radar. The same sort of techies are driving the untamed AI train. Big questions about AI’s transformative nature remain unanswered: Will AI be manipulated as a fount of disinformation? Will it destroy jobs? Will it replace humans?

AI is simply algorithms and data aided by the speed and memory of computer processing. Huge amounts of text and images are fed in, and with each pass over the data the machine learns patterns, in effect teaching itself. It is less a thing than an enabler of things, like electricity.
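To make that idea concrete, consider a toy sketch in Python. It is purely illustrative: nothing here resembles how GPT-4 is actually built, and the sample sentence is invented. The program “learns” which word tends to follow which by counting adjacent pairs in a training text, then strings together new text from those learned patterns.

    from collections import defaultdict, Counter
    import random

    # Toy "language model": learn which word tends to follow which
    # by counting adjacent word pairs in the training text.
    # (Illustrative only; real systems use neural networks trained
    # on vastly larger datasets.)
    text = "the cat sat on the mat and the cat ran"
    words = text.split()

    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1  # count how often nxt follows current

    # Generate new text by repeatedly picking a statistically likely next word.
    word = "the"
    output = [word]
    for _ in range(6):
        choices = follows[word]
        if not choices:
            break  # no known follower; stop generating
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        output.append(word)

    print(" ".join(output))  # e.g., "the cat sat on the mat and"

Everything this program “knows” lives in those counts. Modern chatbots replace the counts with neural networks and the sample sentence with a large slice of the internet, but the principle of extracting statistical patterns from data is the same.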

The future will be AI+ everything: business, industry, agriculture, biosciences and beyond. It is a transformational technology that is already changing society. Henry Kissinger worries that it will destroy reason, the faculty through which humanity has understood the world since the Enlightenment, and alter human cognition.

With each iteration of its algorithms, its capabilities grow rapidly. It can analyze chemical structures and discover new compounds and medicines. But its processes are a black box: We often cannot trace how it reached a given result. That opacity makes it very difficult for humans to control.

For instance, AI can be, and in some cases already is, being used to process job applications, college admissions and insurance rates. AI can be a good guide for human decisions, but it lacks human experience, judgment and consciousness. AI is only the sum of its algorithms and training data, and some of those may be biased. AI cannot discern whether a prospective employee or school applicant is not what their resume advertises, or whether they might be obnoxious, as a face-to-face conversation might reveal.

Looking ahead, the speed of decisions by autonomous weapons will make it very difficult to achieve the goal of most AI ethics proposals: having a human in control. But if a human cannot act quickly enough or understand why AI has decided to take an action, disaster could result. Machines make mistakes.

Such realities suggest that a six-month pause in leading-edge AI experiments may not be enough. The academic, scientific and technological communities have long debated AI ethics and developed voluntary codes of conduct. But there is no comprehensive set of laws or regulations to guide AI applications in the U.S.

The European Union (EU) has put forward a comprehensive legal and regulatory framework for AI applications, as has China. But the U.S., which lacks even a national privacy law, lags behind. The White House has released a “Blueprint for an AI Bill of Rights,” which echoes many of the EU’s provisions but is merely a nonbinding proposal. In the meantime, AI applications are being addressed piecemeal, court case by court case.

There is an increasingly urgent need for a full set of laws and regulations for AI applications in the U.S. and globally. Absent global rules, the world risks a race to the bottom. The open letter has sounded the alarm. It remains for governments to act.

Robert A. Manning is a distinguished fellow at the Stimson Center. He previously served as senior counselor to the undersecretary of State for global affairs, as a member of the U.S. secretary of state’s policy planning staff and on the National Intelligence Council Strategic Futures Group. Follow him on Twitter @Rmanning4.


