Technology

OpenAI says it disrupted Iranian influence operation using ChatGPT

The logo for OpenAI, the maker of ChatGPT, appears on a mobile phone in New York, Jan. 31, 2023.

OpenAI said Friday it disrupted an Iranian influence operation that was using ChatGPT to generate content related to the U.S. presidential election and other topics. 

The network, known as Storm-2035, used the company’s artificial intelligence (AI) chatbot to create content, including “commentary on candidates on both sides in the U.S. presidential election,” which was then shared on social media.

OpenAI has banned the accounts from using its services. It emphasized that the operation “does not appear to have achieved meaningful audience engagement,” with the identified social media posts receiving few or no likes, shares or comments. 

“Notwithstanding the lack of meaningful audience engagement resulting from this operation, we take seriously any efforts to use our services in foreign influence operations,” OpenAI wrote in a blog post.  

“Accordingly, as part of our work to support the wider community in disrupting this activity after removing the accounts from our services, we have shared threat intelligence with government, campaign, and industry stakeholders,” it added. 

The operation used ChatGPT to generate long-form articles, which were published on five websites posing as progressive or conservative news outlets, and short social media comments in English and Spanish, which were posted from accounts on the social platform X and on Instagram that presented themselves as both progressives and conservatives.

The content mainly focused on the conflict in Gaza, Israel’s presence at the Olympic Games and the U.S. presidential election, although some addressed Venezuelan politics, Latino rights in the U.S. and Scottish independence, according to OpenAI.

The accounts used by the operation interspersed this content with “comments about fashion and beauty, possibly to appear more authentic or in an attempt to build a following,” the AI startup noted. 

OpenAI’s disruption of this Iranian influence operation comes after former President Trump’s campaign said last weekend that some of its internal communications were hacked by “foreign sources hostile to the United States.” 

Trump’s campaign pointed to a report from Microsoft on Iran’s influence operations targeting the 2024 election, which revealed that Iranian hackers “broke into the account of a ‘high ranking official’” on a presidential campaign in June. 

The same report also featured information on Storm-2035, noting that it was “masquerading as news outlets” and “actively engaging US voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.” 

OpenAI noted in Friday’s blog post that it “benefited from information about the operation published by Microsoft last week.”