
AI is here to stay, but who is in charge? 

Economic and social euphoria over the emergence of nonbiological forms of intelligence, commonly known as generative artificial intelligence or AI, has captured the imaginations of businesses and individuals alike.

Whether you are an AI accelerationist or a “doomer,” you know that something monumental is happening that may change the shape and trajectory of the future.

In the last two years, Big Tech companies have poured hundreds of billions of dollars into the development of powerful AI systems, while hundreds of start-ups have launched. If money could talk, it would say that AI is the future. And yet 56 percent of Fortune 500 companies say AI poses a risk to their business models.

Much like the frequent question asked in Ayn Rand’s “Atlas Shrugged” — “Who is John Galt?” — we should be asking, “Who is in charge?” 

There is no doubt that, like the internet, AI will positively affect every aspect of life, driving revolutions in healthcare, defense, transportation, finance, communications, food production, manufacturing and the knowledge business itself. But like most technology, it induces a form of peripheral blindness, dulling our instinct to weigh enhancements and threats evenly before using a new product. With tech, the question is usually, “How fast can I download it?”

For example, 100 million starstruck enthusiasts rushed to use ChatGPT in its first 60 days, even though the overwhelming majority likely did not know who its creator was, how large language models work or what data it might be extracting from them. Could they even be sure it was not a massive subterfuge by Chinese hackers? Most consumers spend more time researching the purchase of a microwave oven.

AI can dramatically magnify the untrustworthiness of an insecure internet and hand malevolent users even more powerful ways to defraud, cheat and steal on a scale never seen before. The AI Risk Repository recently documented more than 700 potential risks that advanced AI systems and intelligent agents could pose, including system safety failures, bias, the proliferation of falsehoods, alternative deepfake universes, emotional addiction and shrinking privacy. It claims that 90 percent of those risks will not be detected until the systems are deployed.

So again, who is in charge? 

A revolution this significant will alter the world’s economic underpinnings, and it may not be all good news. While little to no productivity gain has been detected so far, long-term increases are expected. But Goldman Sachs economists point out that as many as 300 million full-time jobs could be lost or diminished globally because of the rise of generative AI.

Noted futurist Ray Kurzweil explains in his latest book, “The Singularity Is Nearer,” that those jobs may never return. That will force governments to deal with a population that is increasingly unemployed as machines conceptualize, research, think, create and complete work more effectively and efficiently. Who is in charge?

Governments themselves may also have to be fundamentally transformed as machines alter how societies function.

Private-sector machine intelligence will increasingly have to be regulated by public-sector algorithms, which will offer governments vast new opportunities to exert control. Authoritarian nations such as China are already learning to wield that control through facial recognition cameras, GPS tracking, cellphone apps and social-scoring algorithms, tools they are happily exporting around the world.

Even the most democratic governments may eventually come to realize that they must limit their own use of AI so that political parties will not be tempted to use it to solidify and maintain their power.

Ultimately, some expect nonbiological intelligence to become increasingly humanized, eventually replicating human consciousness. If we reach that point, we will need new laws both to govern machine consciousness and to reward it for social behavior. Conscious machines may one day qualify for Social Security, unemployment insurance and healthcare benefits. While such forms of machine intelligence may still be many years or decades away, if there is even a 10 percent chance that they will arrive, we should be planning for them now.

While much of what AI will do will be for the better, humans will become less and less capable of determining their own futures as massive amounts of their personal data are collected and life-altering decisions are made about them before they even realize it. If governments are constantly playing catch-up, the value of any oversight they can offer will be modest at best. Are we prepared to cede critical decisions about our futures to unelected tech titans such as Elon Musk (X), Sam Altman (OpenAI), Jeff Bezos (Amazon), Sundar Pichai (Google) and Mark Zuckerberg (Meta/Facebook)?

As with most things, the future probably lies somewhere between the euphoric predictions of AI accelerationists and the machine apocalypse foretold by doomers. But one thing seems clear: the future we want will have less chance of coming to fruition if we don’t take charge of writing the script instead of defaulting to an inevitability over which we have no control.

Democratic nations working together have the power to create the rules and implement the governance standards that can keep people in charge of their own lives and use technology to make technology safe.

Those who believe that this election season we will choose people who will lead on these issues can sleep well at night. For my part, I don’t sleep much these days.

Who is in charge? 

Thomas P. Vartanian is the Executive Director of the Financial Technology & Cybersecurity Center and author of the new book, “The Unhackable Internet.”