Several bipartisan bills introduced in Congress aim to curb the spread of nonconsensual explicit images made with artificial intelligence (AI), an issue that has plagued not only public figures and celebrities but also everyday people and even children.
“The past year, it’s really been a new thing where it’s forced itself — where we’ve got a real big problem,” said Ann Olivarius, the founding partner of McAllister Olivarius, a transatlantic law firm specializing in cases of race and gender discrimination.
In January, explicit AI-generated images made to look like Taylor Swift circulated online, bringing mass attention to the issue. The outcry prompted lawmakers and the White House to push platforms to enforce their rules and prevent the spread of such images.
While the spread of the Swift deepfakes put a spotlight on the rise of nonconsensual AI-generated pornography, the issue has become far more widespread. Schools have been forced to grapple with a new form of cyberbullying and harassment as students create and share deepfakes of their peers in a largely unregulated space.
“It’s impacting tons of everyday people,” Olivarius said.
Lawmakers have also been victims. Rep. Alexandria Ocasio-Cortez (D-N.Y.), one of the lawmakers spearheading a bill to fight explicit deepfakes, spoke about being targeted by such deepfakes herself in an April interview with Rolling Stone.
The issue is drawing in support from lawmakers across the political spectrum. One of the bills, the Defiance Act, is led by Ocasio-Cortez and Senate Judiciary Committee Chair Dick Durbin (D-Ill.), while another, the Take It Down Act, is led by Sens. Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.).
Olivarius said the support from both sides of the aisle is striking.
“It’s looking like we might have something here finally that lawmakers can agree upon or enough to actually pass,” she said.
Read more in a full report coming to TheHill.com.