
We must address the AI risks right in front of us today

The ChatGPT app is displayed on an iPhone in New York, May 18, 2023. (AP Photo/Richard Drew, File)

ChatGPT — the generative artificial intelligence (AI) app that can answer obscure questions, write computer code, compose haikus, tutor algebra and engage in eerily humanlike conversation — has activated Silicon Valley’s hyperbole machine.

Sam Altman, chief executive of OpenAI, the start-up that designed ChatGPT with funding from Microsoft, told a TV interviewer, “This will be the greatest technology humanity has yet developed.”

Not everyone agrees. Forty-two percent of CEOs polled at a Yale University corporate summit in June said they’re concerned that artificial intelligence could destroy humanity within the next five to 10 years.

Who’s right? To answer that question, Congress is holding hearings, Senate Majority Leader Chuck Schumer (D-N.Y.) is promising a raft of legislation and President Biden is meeting with industry experts.

Let’s clear the air. First, ChatGPT and rival generative AI systems do not, in and of themselves, constitute a threat to the existence of humankind. But the technology does create serious immediate risks. These include the facilitation of political disinformation and cyberattacks, amplification of racial and gender bias, invasions of personal privacy and proliferation of online fraud.

Emphasizing these more discrete hazards makes sense even if one harbors anxiety that, if left unchecked, advancing AI may one day pose existential dangers. If we want to grapple effectively with potential threats to humanity, the best way to start is to regulate the AI risks right in front of us.

A few words of background: Generative AI developers feed mountains of data scraped from the internet into mathematical systems called neural networks, which are trained to recognize statistical patterns in the information. One type of network called a large language model (LLM) is trained to analyze all manner of online text: Reddit posts, digitized novels, peer-reviewed scientific studies, tweets, crowdsourced Wikipedia entries and much more. By observing patterns, an LLM gradually develops the ability to formulate prose, computer code and even conversation. There are also systems that generate images and audio.  
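
For readers who want a concrete sense of what "recognizing statistical patterns" means, here is a minimal illustrative sketch. It is not how ChatGPT or any production system is built; real LLMs use neural networks with billions of parameters trained on vast corpora, and the sample sentence below is invented for illustration. But the core idea, predicting the next word from patterns observed in prior text, is the same.

```python
import random
from collections import defaultdict

# Toy word-level model: count which word tends to follow which in a sample
# text, then use those counts to generate new text. This stands in for the
# statistical pattern-learning that neural-network LLMs do at enormous scale.

training_text = (
    "generative ai systems learn statistical patterns in text "
    "and use those patterns to predict the next word in a sequence"
)

# Record, for each word, the words observed to follow it.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start_word: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a word that followed the last one."""
    output = [start_word]
    for _ in range(length):
        candidates = transitions.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("statistical"))
```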

Generative AI has promising applications in health care, education and scientific research. But a new report published by the Center for Business and Human Rights at New York University’s Stern School of Business enumerates a series of societal hazards.

To address these dangers, tech companies can take a variety of steps.

For their part, Congress and regulatory agencies can begin by ensuring that existing criminal, consumer protection and privacy laws are enforced in cases involving generative AI. Lawmakers should go further, enhancing the authority of the Federal Trade Commission or creating a new stand-alone agency to oversee digital industries, including AI companies.

Congress has failed in recent years to pass legislation mandating more disclosure by the social media industry. It must return to the task while broadening its field of vision to include AI. At the same time, lawmakers need to try again to pass a privacy law that would give consumers more control over their personal information. Finally, Congress should bolster public sector and academic AI research capacity so that regulators and university scientists can keep up with their private sector counterparts.

We can’t afford to repeat the mistakes made with social media, which grew into a virtually unregulated industry based on cheerful marketing about promoting free speech and personal connections. Without much oversight, Facebook, Twitter and YouTube became havens for misogynist and racist trolls, Russian disinformation operatives and Jan. 6 insurrectionists.

Today, we have an opportunity to get ahead of the problems that generative AI will almost certainly create. The tech industry needs to be held accountable, no matter how lucrative its innovations are.

Paul M. Barrett is the deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business. Justin Hendrix is an associate research scientist and adjunct professor at NYU Tandon School of Engineering and the CEO and editor of Tech Policy Press.