Facebook whistleblower, AI godfather join hundreds calling for deepfake regulation

A woman in Washington, DC, views a manipulated video on January 24, 2019, that alters remarks by President Donald Trump and former President Barack Obama, illustrating how deepfake technology can deceive viewers. "Deepfake" videos that manipulate reality are becoming more sophisticated and realistic as a result of advances in artificial intelligence, creating the potential for new kinds of misinformation with devastating consequences. (Photo by Rob Lever/AFP via Getty Images)

Facebook whistleblower Frances Haugen, former Democratic presidential candidate Andrew Yang and one of the “godfathers” of artificial intelligence (AI), Yoshua Bengio, were among hundreds who signed an open letter Wednesday calling for deepfake regulation.

More than 400 AI experts, artists and politicians signed the letter, which urged governments to pass laws criminalizing deepfake child pornography and establishing criminal penalties for those who knowingly create or facilitate the spread of harmful deepfakes.

The letter also suggested that software developers and distributors be required to prevent their products from creating harmful deepfakes, and that they be held liable if their preventive measures are too easily circumvented.

Deepfakes — which the letter describes as “non-consensual and grossly misleading AI-generated voices, images, or videos, that a reasonable person would mistake as real” — pose growing risks as AI technology has become more widely available. 

“Deepfakes are a huge threat to human society and are already causing growing harm to individuals, communities, and the functioning of democracy,” said Andrew Critch, an AI researcher at the University of California, Berkeley, and lead author of the letter, in a statement.

“We need immediate action to combat the proliferation of deepfakes, and my colleagues and I created this letter as a way for people around the world to show their support for law-making efforts to stop deepfakes,” he added.

Explicit AI-generated images of pop superstar Taylor Swift went viral last month, prompting the White House to voice concerns.

“We are alarmed by the reports of the circulation of the … false images,” White House press secretary Karine Jean-Pierre said at the time.

“While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people,” she added.

Robocall messages imitating President Biden’s voice also went out to New Hampshire voters last month, urging them not to cast ballots in the state’s primary and highlighting concerns about the potential for election misinformation.

Earlier this month, the Federal Communications Commission banned the use of AI-generated voices in robocalls. The Federal Trade Commission also proposed a rule last week that would prohibit the impersonation of individuals, pointing to the recent rise of AI-generated deepfakes.