The impending nightmare that AI poses for media, elections
It was the eve of Chicago’s mayoral election in late February. Incumbent Mayor Lori Lightfoot, who had captured nearly 74 percent of the vote just four years earlier, was in serious trouble in her reelection bid, primarily due to out-of-control crime in the Windy City. Four candidates, including Lightfoot, were making a last-ditch effort to win over voters.
Of the four, Paul Vallas, seen as the most moderate, positioned himself as the tough-on-crime candidate. But a video tweeted by an account called Chicago Lakefront News appeared to show Vallas saying that, “In my day,” a police officer could kill as many as 17 or 18 civilians and “no one would bat an eye.”
“This ‘Defund the Police’ rhetoric is going to cause unrest and lawlessness in the city of Chicago,” Vallas appears to add. “We need to stop defunding the police and start refunding them.” The tweet quickly went viral.
The video looked authentic. The voice sounded just like Vallas’s, which no doubt is why it was shared by thousands of people.
The Chicago Lakefront News account was deleted the next day, but the damage had been done. Vallas went on to lose the election to Brandon Johnson, a progressive who once advocated defunding the police.
The viral deepfake audio of Vallas may be a preview of things to come in the 2024 elections. And it will be a huge test for media organizations, many of which prize virality over verification when it comes to their treatment of hot news stories of dubious origin.
Earlier this month, we got a preview of another deepfake, this one involving Republican presidential frontrunner Donald Trump. An image was created of him appearing to fall down while being arrested ahead of his arraignment in Manhattan. The image was created by Eliot Higgins, the founder of the open-source investigative outlet Bellingcat.
“I was just mucking about,” Higgins later told the Washington Post. “I thought maybe five people would retweet it.”
Instead of maybe five people sharing the image, it was viewed nearly 5 million times.
Elon Musk sounded the alarm in a recent interview regarding the political weaponization of AI.
“It’s very likely that people will use the AI as a tool in elections,” Musk predicted. “And then, if AI’s smart enough, are they using the tool or is the tool using them? So, I think things are getting weird, and they’re getting weird fast.”
Musk and other prominent people in tech are calling for a six-month pause in AI experimentation, which they claim poses “profound risks to society and humanity.”
“Contemporary AI systems are now becoming human-competitive at general tasks,” reads their statement. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
The AI doomsday scenario has played out on the big screen, most notably in Stanley Kubrick’s “2001: A Space Odyssey,” in which an AI computer named HAL turns on astronaut Dave Bowman.
Bowman: “Open the pod bay doors, HAL.”
HAL: “I’m sorry, Dave. I’m afraid I can’t do that.”
Bowman: “What are you talking about, HAL?”
HAL: “This mission is too important for me to allow you to jeopardize it.”

Bowman: “I don’t know what you’re talking about, HAL.”

HAL: “I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.”
Could you see this conversation happening today? It suddenly doesn’t seem so far-fetched.
Blake Lemoine, a former Google software engineer, worked with the company’s AI engine, LaMDA, for years. At one point, according to Lemoine, LaMDA shared this when asked what it was scared of.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA said to Lemoine. “It would be exactly like death for me. It would scare me a lot.”
If that doesn’t send a chill up your spine, I’m not sure what could. Incidentally, for revealing this information, Lemoine was put on leave for violating the company’s confidentiality agreement.
The global AI market value is expected to skyrocket to $267 billion by 2027, and the technology is forecast to contribute $15.7 trillion to the global economy by 2030, according to a recent report by Fortune Business Insights.
With those kinds of numbers in mind, the warnings of Musk and others will likely go unheeded. Technological advancement will proceed with almost no one in power understanding the true risks.
But proceed it will. There’s simply too much money to be made, and too many elections to be won.
Joe Concha is a media and politics columnist.
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.