The Federal Communications Commission (FCC) last week unanimously voted to recognize AI-generated voices as “artificial” under the Telephone Consumer Protection Act, making robocalls that use such voices illegal.
The vote came shortly after a call with an AI-generated voice impersonating President Biden spread throughout New Hampshire ahead of the state’s primary.
Experts called the FCC ban a welcome first step toward curbing deceptive AI-generated content, but not nearly enough on its own.
“Of course, voice content is very, very important, but it’s just one kind,” said Julia Stoyanovich, an associate professor at New York University’s Tandon School of Engineering.
“We need to be thinking holistically about AI-generated media and how to regulate the use of such media and how to ban, or have accountability more generally, when these media are used in particular settings.”
Under the Telephone Consumer Protection Act, which restricts the use of artificial or prerecorded voice messages in telemarketing calls, the FCC can fine robocallers and block calls from telephone carriers facilitating illegal robocalls.
“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” FCC Chair Jessica Rosenworcel said in a statement.
“No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”
“That’s why the FCC is taking steps to recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers,” she added.
Experts and advocates are now increasing pressure on the Federal Election Commission (FEC) to fill the gaps left by the FCC’s robocall ban and ramp up its efforts to regulate AI.
The FCC and other federal regulators have faced steady pressure from the nonprofit group Public Citizen and other advocates calling for AI guardrails ahead of the 2024 election.
“This rule will meaningfully protect consumers from rapidly spreading AI scams and deception,” said Robert Weissman, president of Public Citizen.
“Unfortunately, through no fault of the FCC, this move is not enough to safeguard citizens and our elections,” he added.
The FCC’s limited scope leaves AI-generated images and videos unregulated as political campaigns and their supporters increasingly use such materials ahead of the election.