
Deepfakes are a problem — So what is the solution?


“Deepfakes” is the latest scary buzzword circulating on Capitol Hill. A deepfake is essentially a fake video made to look real.

But “deepfakes” is really just a new word for the “photoshopping” of digital images. It can mean putting one person’s face on another’s body with the intent to deceive, or it might be an obvious attempt at satire (like Jon Snow apologizing for Season 8 of Game of Thrones).

Photoshopping and misreporting have been around for decades. Orson Welles’ famous 1938 radio broadcast of War of the Worlds was a deepfake. The Blair Witch Project was a deepfake.

Politicians worry that social media now allows deepfakes to spread more quickly and broadly than ever. Some are using the deep concern over deepfakes to do an end-run around the First Amendment, hoping to stop the spread of misinformation they consider unflattering.

Viewers can’t always tell if a video is fake, but the good news is that technology can identify when an image or video has been edited. The real problem is not deepfakes themselves but the broader spread of misinformation.
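To make that claim concrete, here is a minimal sketch of one well-known forensic technique, error-level analysis (ELA), which highlights regions of a JPEG that recompress differently from the rest of the image. This is only an illustration of the kind of tool available, not the specific technology the author has in mind, and the file names are hypothetical.

```python
# A minimal sketch of error-level analysis (ELA), one common technique for
# spotting edited JPEGs. Illustrative only; real deepfake detection pairs
# signals like this with machine-learning classifiers.
from PIL import Image, ImageChops
import io


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Resave the image at a known JPEG quality and diff it against the
    original. Regions that were pasted in or re-edited tend to recompress
    differently, so they stand out in the difference image."""
    original = Image.open(path).convert("RGB")

    # Recompress in memory at the chosen quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise absolute difference; brighter areas = higher error level.
    diff = ImageChops.difference(original, resaved)

    # Scale the diff so subtle compression artifacts become visible.
    extrema = diff.getextrema()
    max_channel = max(channel_max for _, channel_max in extrema)
    scale = 255.0 / max(max_channel, 1)
    return diff.point(lambda px: min(255, int(px * scale)))


if __name__ == "__main__":
    # Hypothetical file names for illustration.
    error_level_analysis("suspect_photo.jpg").save("ela_output.png")
```

Bright regions in the output suggest areas with a different compression history, which is a hint of manipulation rather than proof of it.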

Rather than addressing the underlying problems of misinformation and deepfakes, some in Congress want to leverage this threat to attack the tech industry and control what we can post on online platforms.

User-created content platforms have empowered us to reach friends and family in ways never before realized. We can share our own news and views broadly, and see what others are sharing. We can read and post reviews of businesses, helping us separate the good from the bad. We can organize for social and political change with the click of a button. All this is made possible by Section 230 of the Communications Decency Act, which allows platforms to host our content without being responsible for our errors.

Unsurprisingly, traditional news media and broadcasters abhor citizens’ ability to share and access their own news and views. They want to return to the days when we relied on highly educated, self-proclaimed “experts” to give us the news and tell us what to think.

Lawmakers seem to realize that it’s a terrible idea to repeal Section 230 entirely. So instead, there’s a plan to chip away at the law with seemingly “reasonable” changes. But the changes being discussed on Capitol Hill are anything but reasonable.

The latest idea is for Congress to pass a law demanding that platforms maintain “reasonable moderation practices” in order to retain Section 230 liability protection for content posted by their users. This might sound reasonable, but it becomes unworkable when you read past the headline and consider the question: What are reasonable moderation practices?

Should Yelp be required to verify the truth of every user restaurant and hotel review before it can be posted? That only sounds “reasonable” to those whose restaurant received harsh reviews.

Should Change.org be required to investigate potential problems with every petition that users post? That only sounds “reasonable” if your interests are threatened by the petition.

Should The New York Times be liable for every critique of a politician posted by users in its comments section? That only sounds reasonable if you are a politician being criticized.

Should YouTube be required to remove the video of Jon Snow apologizing for Season 8 of Game of Thrones? That sounds reasonable only if you are Game of Thrones showrunners David Benioff and D.B. Weiss.

It doesn’t take a law professor to see the danger in deciding what is and is not “reasonable.” Now imagine making that call at the scale of billions of pieces of user content each day. Yet the obvious opportunity for abuse and bias isn’t stopping some anti-tech activists from scaring lawmakers into amending the law and chilling our ability to speak online.

So how do we address the very real problem of deepfakes without undermining free speech?

To find a solution we must first identify the problem. The problem is not the First Amendment. Nor is the problem social media or the internet.

The underlying problem is that some people believe whatever they are told, and we have a news media that rushes to report rather than first verifying a story, like the BuzzFeed article about Michael Cohen. News outlets can also be selective with video clips, as when CNN showed only half a video to make President Trump appear rude to the Japanese prime minister.

If the problem lies with us, the solution is not to strip our freedom of expression by amending Section 230. Nor is the solution passing laws forcing independent third parties to determine fact from fiction. Nor is it opening up social media platforms to the liability risks brought by trial lawyers looking for a quick payout.

The solution is to better educate Americans that not everything we read and see is true. As has always been the case, the antidote to misinformation is more and better information. That requires our journalists to become better fact-finders and to restrain their personal biases and opinions.

If traditional news reporting were more reliable and less biased, Americans would be better positioned to differentiate fact from fiction. That is how we overcome disinformation and stop deepfakes.

The answer to deepfakes is a depth of information, not shallow attacks on free speech.

Carl Szabo is Vice President and General Counsel of NetChoice, a trade association of eCommerce businesses (members include Facebook, Twitter, Google and more) promoting convenience, choice and commerce on the net.

