Senators press OpenAI over safety concerns after whistleblower complaint
Several senators pressed OpenAI for answers Monday about its safety and employment practices after a group of whistleblowers filed a complaint alleging the company blocked staff from warning regulators about the risks of its artificial intelligence (AI) technology.
Led by Sen. Brian Schatz (D-Hawaii), the group of mostly Democratic senators asked OpenAI CEO Sam Altman about the AI startup’s public commitments to safety, as well as its treatment of current and former employees who voice concerns.
“Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems,” Schatz, alongside Sens. Ben Ray Lujan (D-N.M.), Peter Welch (D-Vt.), Mark Warner (D-Va.) and Angus King (I-Maine), wrote in Monday’s letter.
“This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies,” they continued.
The startup behind the popular AI chatbot tool ChatGPT has come under increased scrutiny after The Washington Post obtained a complaint filed by several whistleblowers with the Securities and Exchange Commission (SEC) earlier this month.
The whistleblowers alleged that OpenAI gave its employees restrictive employment, severance and nondisclosure agreements that required them to waive their federal rights to whistleblower compensation and penalized them for raising concerns with regulators.
“Given the risks associated with the advancement of AI, there is an urgent need to ensure that employees working on this technology understand that they can raise complaints or address concerns to federal regulatory or law enforcement authorities,” the whistleblowers wrote in their complaint.
In Monday’s letter to Altman, the senators asked the OpenAI CEO to confirm that the company will not enforce permanent non-disparagement agreements for its employees and to commit to removing any other provision that could be used to penalize employees for publicly speaking out.
“If not, please explain why, and any internal protections in place to ensure that these provisions are not used to financially disincentivize whistleblowers,” they added.
An OpenAI spokesperson told The Hill “artificial intelligence is a transformative new technology and we appreciate the importance it holds for U.S. competitiveness and national security. We take our role in developing safe and secure AI very seriously and continue to work alongside policymakers to establish the appropriate safeguards going forward.”
OpenAI has also made changes to its departure process to remove non-disparagement terms, according to the spokesperson.
Updated at 1:26 p.m. ET.