OpenAI unveils framework to protect against ‘catastrophic’ AI risks

FILE – The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

OpenAI, the artificial intelligence (AI) company behind the popular AI-powered chatbot ChatGPT, unveiled a framework to protect against the “catastrophic risks” posed by the rapidly developing technology.

“The study of frontier AI risks has fallen far short of what is possible and where we need to be,” OpenAI said in a Monday news release. “To address this gap and systematize our safety thinking, we are adopting the initial version of our Preparedness Framework.”

The framework aims to track the catastrophic risk levels of OpenAI’s models on several fronts — cybersecurity; chemical, biological, nuclear and radiological threats; persuasion; and model autonomy — and score them accordingly.

The scores will then be used to determine whether models can be developed further or deployed. Only models with a risk score of “high” or below can continue in development, and only those scoring “medium” or below can be deployed, according to the framework.

“The central thesis behind our Preparedness Framework is that a robust approach to AI catastrophic risk safety requires proactive, science-based determinations of when and how it is safe to proceed with development and deployment,” OpenAI wrote in its framework.

The framework also calls for additional security measures for models that have reached or are forecast to reach “high” or “critical” risk levels, including restricting access to models and deploying them only in restricted environments.

OpenAI also plans to establish a Safety Advisory Group — appointed by the CEO or a designated person in consultation with the company’s board of directors — that would oversee risk assessment.

The release of the framework comes just weeks after OpenAI’s board of directors ousted CEO Sam Altman in a surprise move that spurred an open revolt at the company. After hundreds of employees threatened to quit, OpenAI brought back Altman and formed a new board.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
