
Deepfake victims must punish Big Tech because Congress won’t

Life-sized cutouts depicting Facebook CEO Mark Zuckerberg wearing "Fix Fakebook" T-shirts are displayed by the advocacy group Avaaz on the South East Lawn of the Capitol in Washington, Tuesday, April 10, 2018, ahead of Zuckerberg's appearance before a joint hearing of the Senate Judiciary and Commerce Committees. (AP Photo/Carolyn Kaster)

Facebook (now Meta) raised international eyebrows in 2016 when it sought to dismiss a lawsuit by a 14-year-old girl from Northern Ireland whose nude image was uploaded without her consent to a so-called “shame page.” It was a curious move by an organization that claims to take these issues seriously.  

Since then, advancements in technology have opened dangerous new doors for predators determined to weaponize social media by committing online acts of sexual violence. Yet Meta and other online platforms haven’t changed their posture; in fact, they’ve lobbied Congress to escape accountability for the behavior of bad actors on their networks.  

That 2016 case is especially relevant today, as social media companies enjoy blanket protection from liability for the growing problem of AI “deepfakes” — images of real people transformed into lifelike pictures and videos depicting them saying and doing things that never happened.  

There was a time, not long ago, when an altered picture — “photoshopped” was the common term — could be spotted a mile away. AI’s evolution has made distinguishing a real image from a doctored one nearly impossible.  

Concern over AI deepfakes has largely focused on their use in perpetrating election interference in this year’s U.S. presidential race. But they raise a more depraved problem that should scare all of us: deepfake pornography, where software programs accessible by a simple online search can turn an innocent image of an unwitting individual into a sexualized scene or video that can be posted online without consent.  

Taylor Swift, along with thousands of other celebrities, has been the target of one such deepfake. Innocent children are being victimized by it, too. 

Not long ago, a 14-year-old boy notified the National Center for Missing & Exploited Children that someone had threatened to post a deepfake pornographic image of him if he didn’t pay the perpetrator a ransom.  

Another 14-year-old girl from New Jersey told The New York Times she was “summoned” to her school’s assistant principal’s office, where she was told some of her male classmates had used an AI program to turn a clothed picture of her into a naked image that was posted online.  

She recalled the shame she felt when boys started laughing at her in the hallway. Images of other girls at the school were altered and shared as well.  

One of the victims has filed a lawsuit against a male classmate allegedly involved. It’s unclear if anyone was disciplined, but it’s safe to say the boys at the school aren’t laughing anymore. 

Local, state and U.S. officials are behind the curve in addressing this problem. 

“All school districts are grappling with the challenges and impact of artificial intelligence,” read the uninspired statement of Superintendent Raymond González in response to the New Jersey incident. A recently introduced state bill spurred “dozens” of other deepfake victims to come forward.  

Social media companies can do something about it, but some simply choose not to. The New York Times’s Nicholas Kristof found that Google and Bing searches for deepfake pornographic content returned a high number of faked celebrity sex videos, while Yahoo’s search engine returned none. By contrast, a Google search on suicide returned no results on how to commit the act, instead offering listings where people can go to get help.  

“In other words,” Kristof posits, “Google is socially responsible when it wants to be, but it seems indifferent to women and girls being violated by pornographers.”   

Social media companies have proven they can’t police themselves, but Congress can. 

Lawmakers can amend the Communications Decency Act to hold social media companies liable when deepfake pornographic images are published on their platforms. They can pass bills, such as the Preventing Deepfakes of Intimate Images Act and the SHIELD Act, which would make the circulation of deepfake pornography a crime. They can pass the DEFIANCE Act, which would enhance deepfake pornography victims’ rights.  

But Congress has gutlessly failed to act on any of these measures.  

Social media platforms could be part of the solution. They could ally with victims and show leadership and compassion by enacting tougher measures to censor deepfake content. They could set an example by pushing for stricter laws and regulations to prevent faked sexual images from being searchable or uploaded on the networks they host.  

Instead, they seek to absolve themselves of the problem. They fight to protect themselves from liability while victims are shamed on their platforms for something they never intended, and never authorized, that will haunt them for the rest of their lives.  

Facebook lost its attempt in 2016 to have the case dismissed and wound up settling with the 14-year-old girl from Belfast. It should be a message to every other victim of revenge and deepfake pornography on these platforms: Sue them and hit them where it hurts.  

As long as Congress remains too timid to stand up to them, and as long as social media companies fight tooth and nail to evade responsibility, victims should drown them in litigation.  

Maybe then, and only then, will they get the message that the burden is on them to solve this crisis. 

Lyndon Haviland, DrPH, MPH, is a distinguished scholar at the CUNY School of Public Health and Health Policy.