Facebook is putting $10 million into a new industry and academic effort to develop technology aimed at detecting “deepfakes,” the company announced Thursday.
The social media platform is teaming up with Microsoft and a handful of research institutions to launch the Deepfake Detection Challenge, with the goal of “detecting and preventing media manipulated via AI from being used to mislead others.”
“‘Deepfake’ techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online,” Mike Schroepfer, Facebook’s chief technology officer, wrote in a blog post. “Yet the industry doesn’t have a great data set or benchmark for detecting them. We want to catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes.”
The coalition will also include researchers from Cornell Tech; MIT; University of Oxford; UC Berkeley; University of Maryland, College Park; and University at Albany-SUNY.
Facebook will help build a deepfake dataset by hiring actors to appear in videos and then manipulating their appearances. The company said that no Facebook user data will be used in the research.
Facebook is inviting researchers to use the dataset to build their own deepfake detection software and submit it to the challenge through its website.
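For readers curious what a basic entry might involve, a common starting point is to treat detection as per-frame binary classification with a pretrained image backbone. The sketch below is purely illustrative and is not the challenge's baseline: it assumes PyTorch and torchvision, and the frame path it scores is a hypothetical placeholder rather than anything from Facebook's dataset.

```python
# Illustrative sketch only: a minimal frame-level "real vs. manipulated"
# classifier built on a pretrained ResNet backbone. Paths and labels here
# are hypothetical placeholders, not part of the Deepfake Detection Challenge.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for frames extracted from video.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

class FrameDetector(nn.Module):
    """Binary classifier that outputs a probability a frame is manipulated."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single logit
        self.backbone = backbone

    def forward(self, x):
        return torch.sigmoid(self.backbone(x)).squeeze(-1)

def score_frame(model, frame_path):
    """Return the model's manipulation probability for one extracted frame."""
    image = Image.open(frame_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        return model(batch).item()

if __name__ == "__main__":
    model = FrameDetector().eval()
    # "frame_0001.jpg" is a placeholder; a real entry would be trained on the
    # labeled challenge videos and would average scores across many frames.
    print(score_frame(model, "frame_0001.jpg"))
```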
It’s unclear how much Microsoft is committing to the project. A company spokeswoman did not immediately respond to a request for comment.
Deepfake technology has raised alarms over the past year after edited videos of notable figures like House Speaker Nancy Pelosi (D-Calif.) and Facebook CEO Mark Zuckerberg went viral, illustrating the extent to which online images can be manipulated.
“This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress,” Schroepfer said in his blog post.