ChatGPT chief warns AI can go ‘quite wrong,’ pledges to work with government

Senators on both sides of the aisle questioned the CEO of the company behind the popular ChatGPT tool on Tuesday about the risks of artificial intelligence (AI) technology as OpenAI and rival companies race to release new AI-powered services.

Tuesday’s Senate Judiciary subcommittee hearing, the first in a series of expected AI hearings, showcased a degree of bipartisan unity as lawmakers begin mulling regulation.

Republicans and Democrats asked largely similar questions about the risks of AI, ranging from intellectual property concerns to the impact on jobs. 

OpenAI CEO Sam Altman pledged to work alongside the government and the rest of the industry to chart a path that maximizes the benefits of the technology and minimizes its wide-ranging risks.

“My worst fears are that we — the field, the technology, the industry — cause significant harm to the world. I think that can happen in a lot of different ways,” Altman said. 

“I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening,” he added. 

Senators on the panel largely focused their questions at the oversight hearing on the broad risks the technology poses, especially to jobs, rather than grilling Altman specifically on what OpenAI is doing.

Altman said that, as with most technological revolutions, he expects a “significant impact” on jobs, but the impact is hard to predict at the moment.

He said GPT-4, the model powering ChatGPT, will “entirely automate away some jobs” but also create new ones.

“GPT-4 is a tool, not a creature,” he said. “GPT-4 and other systems like it are good at doing tasks, not jobs, so you see already people that are using GPT-4 to do their job much more efficiently,” he continued.

The rise of generative AI and ChatGPT

ChatGPT burst onto the scene in November and has seen explosive growth in users. In April, it attracted 1.76 billion visits worldwide, according to data from Similarweb. Microsoft, which has invested heavily in OpenAI, has incorporated ChatGPT into its tools, including its Bing search engine.

At the same time, rival companies, including tech giant Google and a wave of startups, are also expanding generative AI technology. Senators underscored the urgency of regulating AI given the surge of generative AI products hitting the market.

To showcase the potential uses and risks of generative AI, Sen. Richard Blumenthal (D-Conn.) kicked off the hearing with a recording that used an AI-powered audio cloning tool to deliver an opening statement, written by ChatGPT, in his voice.

Blumenthal said there are potential benefits to AI innovation, from helping find cures for cancer to modeling the climate. But he warned that lawmakers need to focus on maximizing the good while limiting the bad, and not repeat the failure to regulate social media at its dawn.

“Congress failed to meet the moment on social media, now [we] have the obligation to do it on AI before the threats and the risks become real,” he said. 

Sen. Josh Hawley (R-Mo.), ranking member of the subcommittee, said the panel could not have held this hearing even a year ago, since ChatGPT had yet to be released to the public, a sign of how rapidly the technology is changing and how it could transform the world “right before our very eyes.”

AI’s impact on jobs, creator rights and elections 

Lawmakers on both sides of the aisle pressed Altman, as well as the other witnesses — NYU professor emeritus Gary Marcus and IBM vice president and chief privacy and trust officer Christina Montgomery — on a range of risks AI poses.

Several lawmakers brought up concerns about the impact AI could have on jobs. 

Marcus said the technology will have a “profound” effect on labor, but the full impact may happen across multiple decades rather than in the shorter term. 

Senators also raised concerns about intellectual property laws, based on the content AI models are trained on and the types of results they produce. The issue was raised across sectors, with Sen. Marsha Blackburn (R-Tenn.) raising it in terms of compensation to artists and Sen. Amy Klobuchar (D-Minn.) asking about the impact it could have on local newspapers.

Altman said OpenAI is working with visual artists and musicians to consider different options that could give creators better control over what happens to their creations. 

He also said he hopes ChatGPT could help news organizations, including local outlets.

Heading into the next major election cycles, senators also questioned how ChatGPT and other generative AI tools could be used to spread misinformation.

Concerns were raised both about AI-powered chatbots delivering false information when users ask basic questions about polling locations and about how the technology could be used to learn about users and create persuasive misinformation campaigns.

Altman said one clear way to address those concerns is to ensure users know whether they are interacting with AI-generated content. He said some regulation would be wise, but that companies can also voluntarily adopt some changes.

He also said that more recent versions of GPT have become increasingly accurate and have been trained not to respond to certain questions in an attempt to mitigate risks that have arisen.

Senators mull regulation options 

The roughly two-hour hearing was also less contentious than previous Senate hearings featuring CEOs of tech companies, such as Meta and Google.

Blumenthal called Altman’s appearance “night and day” compared to other tech CEOs. 

“He was very, very constructive and informative and contributed immensely,” Blumenthal said. 

“I was really very heartened and encouraged by the quality of the testimony here today, and the participation of my colleagues which … was very bipartisan,” he added. “And there’s, I think, a strong degree of consensus about developing the basic principles and the means to implement them.” 

Although senators aimed to dig into the potential risks and mitigation strategies, there is still uncertainty over what type of regulatory approach they would take to address those risks.

Blumenthal said lawmakers need to grapple with the “hard questions” that were “raised successfully” but not necessarily answered. 

He said the hearing is the first in an expected series, which could include Google and other companies in the industry as well as experts in the field.

One option lawmakers weighed to mitigate risks is creating a new agency tasked with regulation and oversight. But that agency would have to be given the resources to be effective, Blumenthal said.

“Clearly the [Federal Trade Commission] doesn’t have the capability right now. So if you’re going to rely on the FTC you are going to have to, in effect, clone it within itself so to speak. I think there’s a powerful argument for an entirely new agency that is given the resources to really do the job,” he said.

Hawley also raised the potential to create a type of federal right of action that would allow individuals to sue companies if they allege being harmed by AI. 

Traditional social media companies are covered by Section 230 of the Communications Decency Act, which keeps them from being held liable for content posted by third parties. It is not clear, however, whether Section 230 covers generative AI content, and there is no law specifically establishing whether users have the right to sue companies over harm from AI.

Blumenthal said a proposal to add guardrails for AI could happen this session. He noted Senate Majority Leader Chuck Schumer’s (D-N.Y.) framework to add guardrails, as well as the White House release of an AI Bill of Rights framework as indications of movement to add regulation. 

“There [are] certainly a lot of legs for a bill, and a lot of momentum, and there’s a clear need. People are excited, but also anxious, with good reason,” he said.
