To battle deepfakes, our technologies must track their transformations

A frame from a deepfake video of former President Barack Obama shows elements of the facial mapping used in technology that lets anyone make videos of real people appearing to say things they never said. There is rising concern that U.S. adversaries will use such technology to make authentic-looking videos to influence political campaigns or jeopardize national security.

In the video record of the Ukraine war, a clumsy attempt to “deepfake” Ukrainian President Volodymyr Zelensky coexists with critical on-the-ground video evidence of abuses, pervasive misinformation spread for grift and attention, and Russian false flag operations.

These scenes from the war provide a glimpse into a future where, alongside existing forms of manipulation and misattribution, deepfake technology — images that have been “convincingly altered and manipulated to misrepresent someone doing or saying something that was not actually done or said” — will be more readily employed. More false videos will be forged, and the ‘liar’s dividend’ (dismissing real footage as fake) will be used to cast doubt on authentic videos.

One set of solutions to these current and future problems proposes to better track where media comes from, what has been synthesized, edited or changed, and how. This ‘authenticity and provenance’ infrastructure deserves close attention to its possibilities, and preventive work on its risks.

In January, the Coalition for Content Provenance and Authenticity (C2PA), led by the BBC, Microsoft, Adobe, Intel, Twitter, Truepic, Sony and Arm, proposed the first global technical standards for tracking which content is authentic and which is manipulated. The specifications provide a way to follow the origins of, and changes to, a piece of media content from capture on a camera through editing to distribution by major media outlets or on a social media feed. Companies are now starting to build tools on these underlying standards.
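To make the mechanics concrete, here is a minimal sketch of the idea behind such provenance chains: each step in a media file’s life is recorded and cryptographically bound both to the media bytes and to the prior record, so tampering with the history is detectable. Every name below (ProvenanceStep, append_step, verify_chain) is invented for illustration; the actual C2PA data model, with its signed manifests and assertions, is considerably richer.

```python
# Illustrative sketch of the idea behind provenance standards such as
# C2PA: each step in a media file's life is recorded, and each record is
# bound to the asset bytes and to the prior record. All names here are
# invented for illustration; real C2PA manifests are signed and richer.
import hashlib
import json
from dataclasses import dataclass

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class ProvenanceStep:
    action: str       # e.g. "captured", "cropped", "redacted"
    actor: str        # a tool or organization, not necessarily a person
    asset_hash: str   # hash of the media bytes after this step
    prev_hash: str    # hash of the previous step's record ("" at capture)

    def record_hash(self) -> str:
        return sha256(json.dumps(self.__dict__, sort_keys=True).encode())

def append_step(chain: list[ProvenanceStep], action: str, actor: str,
                media: bytes) -> None:
    prev = chain[-1].record_hash() if chain else ""
    chain.append(ProvenanceStep(action, actor, sha256(media), prev))

def verify_chain(chain: list[ProvenanceStep], media: bytes) -> bool:
    # The final record must match the media we actually have...
    if not chain or chain[-1].asset_hash != sha256(media):
        return False
    # ...and every record must reference the one before it, unbroken.
    for prev, step in zip(chain, chain[1:]):
        if step.prev_hash != prev.record_hash():
            return False
    return True

# Usage: capture, then edit; verification fails if the history is altered.
original = b"...camera raw bytes..."
edited = b"...cropped bytes..."
chain: list[ProvenanceStep] = []
append_step(chain, "captured", "CameraApp", original)
append_step(chain, "cropped", "PhotoEditor", edited)
assert verify_chain(chain, edited)
```

The chaining is the point: a verifier needs only the current media bytes and the records to confirm an unbroken history, without trusting any single intermediary.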

We must do more to make authenticity infrastructure available — what’s happening in Ukraine underscores this need. But we must keep high-risk global contexts and usages, critical human rights principles and potential risks front and center. That’s why we at WITNESS joined the C2PA and led its Threats and Harms task force.

The question must always be: How do we ensure these technical solutions for enhancing confidence in media help rather than harm? Based on nearly five years of work at the forefront of this issue, we identify four core concerns: the need for anonymity and redaction, the role of platforms and opt-in signals, secure access in global contexts, and the need to avoid abuse by governments.

Building trust in content must allow for anonymity and redaction. A human rights documenter in Ukraine cannot be required to publicly share their name and exact location as a prerequisite for using these tools. They must also be able to blur faces or create redacted versions of their media; immutability and an inability to edit are recipes for disaster. Platform policies around ‘real names’ have taught us that many people — for example, survivors of domestic violence — face similar circumstances. We support the C2PA specifications’ focus on protecting privacy without requiring identity disclosure, but we should be alert to implementations that might compromise this.
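A hedged sketch of how that can work without breaking verifiability: redaction becomes its own recorded step, pointing back at the original’s history, and the actor field can name a tool rather than a person. The schema below is again illustrative, not the C2PA’s own redaction mechanism.

```python
# Illustrative only: redaction as a *recorded* edit rather than a break
# in the chain. A redacted copy is a new asset whose record points back
# to the original's history; no personal identity is required.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def redaction_record(original_record_hash: str, redacted_media: bytes) -> dict:
    return {
        "action": "redacted",          # e.g. faces blurred, metadata stripped
        "actor": "RedactionTool/1.0",  # a tool name, not a legal identity
        "asset_hash": sha256(redacted_media),
        "prev_hash": original_record_hash,
        # Deliberately absent: name, location, device serial number.
    }

record = redaction_record(sha256(b"...original record..."), b"...blurred bytes...")
```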

Platforms must likewise consider how they integrate these kinds of signals. Facebook, TikTok, YouTube and other platforms currently offer no ready consumer-facing options to help viewers understand when videos or images are being decontextualized and misconstrued. They provide neither reverse image and video search, which would surface earlier versions of media already online, nor signaling based on the C2PA specifications. Meanwhile, turn to #Ukraine and we see videos from years ago, or from other locations, claimed as current, and audio remixed over unrelated videos on TikTok. Increased transparency about how media is created and where it comes from is a critical aid to media literacy.
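One well-established technique behind such reverse image search is perceptual hashing, which reduces an image to a short fingerprint that survives re-encoding and mild edits, so recycled footage can be matched to earlier uploads. Below is a minimal average-hash (“aHash”) sketch using the Pillow library; it is one common approach, not anything specified by C2PA or by the platforms named above.

```python
# Minimal average-hash ("aHash") sketch of how reverse image search can
# flag recycled media: near-duplicate frames produce near-identical
# fingerprints even after re-encoding or mild edits.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    # Shrink to an 8x8 grayscale grid, then set one bit per pixel
    # depending on whether it is brighter than the mean.
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits; small distances suggest the same image.
    return bin(a ^ b).count("1")

# Usage (hypothetical filenames): a small distance suggests the "new"
# viral image is actually old footage recirculating.
# if hamming(average_hash("viral_now.jpg"), average_hash("archive_2014.jpg")) < 10:
#     print("Likely a recirculated image, not new footage")
```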

The C2PA itself emphasizes that understanding a video’s origins and edits does not tell you whether the media is factually accurate or editorially reliable, and we agree. Any credential on content must be an aid to making informed decisions, not a simplistic truth signal. We must always view these credentials through the lens of who has access in diverse global and security contexts. We cannot discredit a citizen journalist or ordinary person in a city under siege for not using tools like these while buttressing the content of a mainstream journalist for a state-sponsored television channel that does use them. The latter’s journalism can be foundationally unreliable even if the media is well documented from a provenance standpoint.

In the United States, Sen. Rob Portman’s (R-Ohio) Deepfake Task Force Act proposes research on digital provenance, and responsible governments should consider how to use the C2PA and similar standards. But passing a ‘fake news’ or online safety law that mandates these technologies would be a step in the wrong direction, undermining the principle that they remain opt-in and non-coercive.

These standards will be abused, and bad actors will find loopholes. Yet a partial solution is still a step in the right direction. At WITNESS, we have led an initiative over the past four years focused on the rising risks of deepfakes. The challenge of understanding what we see and hear will grow as improvements in digital fakery pile onto the existing problems of misidentified and mischaracterized ‘shallowfakes’ — media selectively altered with ordinary photo and video editing tools.

Emerging authenticity and provenance infrastructure can help address our growing crisis of belief in audiovisual media. Now the key is to uphold human rights values of equitable access to technology, freedom of expression and privacy. We need guardrails and watchdogs for technologies of trust. These tools will succeed or fail based on how they serve the most vulnerable people defending their truth in the streets of Ukraine or the hinterlands of many other conflict zones: They have the most to gain and the most to lose. 

Sam Gregory is the director of programs, strategy and innovation of WITNESS, a human rights organization with global experience working on how people in crisis ensure their videos are trustworthy and more evidentiary.