House Intel to take first major deep dive into threat of ‘deepfakes’
The House Intelligence Committee is gearing up to host one of the first congressional hearings specifically focused on the threat of so-called deepfakes, videos manipulated with artificial intelligence to appear strikingly real.
The hearing, slated for Thursday morning and featuring academics and other experts, comes amid warnings that such technology poses a major disinformation threat ahead of the 2020 presidential election.
“I think we all have to be much more skeptical consumers of what we see online,” House Intelligence Chairman Adam Schiff (D-Calif.) told The Hill. “By the time you can tell that it’s a fake, the damage has already been done. So we want to try to inform the public about this so that if it does occur, when it does occur, they’ll have some background about the product.”
Fears about disinformation threats have lingered since the 2016 presidential election, when Russia created fake accounts on social media aimed at sowing divisions and stirring tensions.
As other actors seek to follow the Kremlin’s playbook, deepfake technology is growing more sophisticated and prevalent. It’s possible that by the 2020 election, an average internet user could create doctored videos so realistic that forensic experts would have to verify whether the content is real.
“We aren’t just worried about Twitter bots and fake Facebook accounts anymore,” Rep. John Ratcliffe (R-Texas), a member of the Intelligence Committee, told The Hill.
“It is important for us to recognize that the threat of misinformation campaigns is evolving. And with the development of AI and deepfakes, the effort to insert fake news into our media is more sophisticated than ever.”
Top U.S. intelligence officials, including Director of National Intelligence Dan Coats, testified before Congress in January that hostile foreign actors are expected to try to weaponize deepfakes to sow discord and breed doubt.
“Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners,” reads the intelligence community’s 2019 Worldwide Threat Assessment.
The potential problem of doctored videos returned to the spotlight in late May, when a fake video of Speaker Nancy Pelosi (D-Calif.) made its way across Facebook and other social media platforms, receiving millions of views.
The video, edited to make Pelosi appear as if she was slurring her words, was then amplified on Twitter by President Trump. And while the video was not produced with advanced AI technology, Schiff and other experts say that will be the next step in the disinformation threat — particularly if public figures say manipulated material is fair game.
“We got a real wake-up call with this cheap fake that was pushed out about Speaker Pelosi, and something that the president has pushed out on his Twitter feed,” Schiff told The Hill. “There is what’s called a liar’s dividend for those who are not wedded to the truth. It’s a great new phenomenon.”
Asked about Facebook’s decision not to pull down the video, the chairman said it is “a tough area to legislate around” but emphasized that there is a difference between parody and the intent to harm.
“This isn’t offered as a parody software, it is to deceive people into thinking ill of the Speaker. I would hope that that kind of fraud would violate the terms of any social media company’s platform,” Schiff said.
“We’re going to need the companies that fuel the propagation of this content to act as good corporate citizens. If 20 million people see a fake video, but only 15 million people ever hear that it’s a fake, that means 5 million people are misled.”
Facebook CEO Mark Zuckerberg became the target of a deepfake video on the eve of the Intelligence Committee hearing. That video used fake audio to make it appear he was saying “whoever controls the data, controls the future.”
Facebook decided not to take that video down either.
Experts have previously warned The Hill that such videos could threaten the integrity of an election if, for example, one falsely depicting a candidate making controversial statements goes viral on the eve of a vote.
Rep. Rick Crawford (R-Ark.), also a member of the committee, said he could see scenarios in which deepfakes manipulate comments coming from world leaders, sparking responses from adversaries and allies alike.
“I could see it being manipulated and used for nefarious purposes and really create problems for us in a national security perspective,” Crawford said.
Schiff also voiced concern that as partisan battle lines grow more entrenched, it will become increasingly difficult to convince critics that a certain video is not real, depending on who is the target of the manipulated content.
The emergence of deepfakes, he said, will create an environment where individuals can claim real videos are fake, or claim forged content is real, depending on their motivations.
“In that environment, people just fall back on party affiliation or other associations as a proxy for the truth. And that’s a dangerous thing for democracy,” Schiff said.