
AI whistleblowers warn of dangers, call for transparency

The OpenAI logo is seen displayed on a cell phone in front of an image on a computer screen generated by ChatGPT on Friday, Dec. 8, 2023. (AP Photo/Michael Dwyer)

A group describing itself as “current and former employees at frontier AI [artificial intelligence] companies” called in a letter Tuesday for greater transparency and stronger whistleblower protections in the industry.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter reads. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

The letter is signed by people identifying themselves as current and former employees of companies including OpenAI, Google’s DeepMind and Anthropic; six of the signatories chose to remain anonymous.

“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the letter continues. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues.”

The letter specifically asks that “advanced AI companies” commit to a set of “principles,” including not entering “into any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concern” and fostering “a culture of open criticism.”

“We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public,” the letter continues. “However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”

OpenAI announced last week that it was establishing a safety panel to make recommendations to its board about “critical safety and security decisions,” led by its CEO Sam Altman alongside directors Adam D’Angelo, Nicole Seligman and Bret Taylor.

OpenAI is “proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” a spokesperson for the company told The Hill in an emailed statement.

“We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” the spokesperson continued.