
Perception won’t be reality, once AI can manipulate what we see


In March, voice-spoofing technology was used to steal a quarter-million dollars from the unwitting CEO of an energy company who thought he was talking to his German boss. A recent study showed that 72 percent of people reading an AI-generated news story found it credible. In September, a smartphone app called Zao became a viral sensation in China; before the government abruptly cracked down on it, Zao allowed people to seamlessly swap themselves into famous movie scenes.

Then there is the infamous case of the doctored video of House Speaker Nancy Pelosi (D-Calif.) that went viral before it was identified as having been manipulated to make her appear drunk.

Most of the recent advances in AI — artificial intelligence — have come in the realm of perceptual intelligence. This has enabled our devices to see (and recognize the faces of our friends, for example), to hear (and recognize that song) and even to parse text (and recognize the rough intent of the emails in your mailbox). Today’s AI technology can also generate these percepts — our devices can generate scenes and faces that never existed, clone voices to generate speech, and even write pithy (if stilted) responses to the emails in your inbox.

This ability to generate perceptions puts AI in a position of great promise and great peril. 

Synthetic media can have many beneficial applications. After all, inducing suspension of disbelief in the audience is the cornerstone of much of entertainment. Nevertheless, it is the potential misuses of the technology — especially those going under the name of “deep fakes” — that are raising alarms.

If perception is reality, then what happens to reality when AI can generate or manipulate perceptions? Although forgeries, fakes and spoofs have existed for much of human history, they had to be crafted manually … until now. The advent of perceptual AI technology has considerably reduced the effort needed to generate convincing fakes. As we saw, the Zao app allowed lay users to swap themselves into movie scenes. What is more, as the technology advances, it will become harder to spot the fakes. Sites such as “Which Face is Real?” show that, already, most people cannot tell AI-generated images from real ones.

Easy generation and widespread dissemination of synthetic media can have significant adverse consequences for many aspects of civil society. Elections can be manipulated through the spread of deep fake videos that put certain candidates in compromising positions. Spoofing of voice and video calls can unleash a slew of new consumer scams. Individual privacy can be invaded by inserting people’s likenesses into compromising (and sometimes pornographic) pictures and videos.

What are our options in fighting this onslaught of AI-enabled synthetic media? To begin with, AI technology itself can help us detect deep fakes by leveraging the known shortfalls of current AI technology; there are techniques that spot fake text, voice, images and video. For example, in the case of images, fakes can be detected through imperceptible pixel-level imperfections or background inconsistencies; it is hard for most fake generators to get the background details correct. (In much the same way, when we remember our dreams in the morning, the parts that don’t make sense are often not the faces of the people but, rather, the background story.) For detecting fake videos of people, current techniques focus on the correlations between the lip movements, speech patterns and gestures of the original speaker. Once detected, fake media can be added to global databases of known fakes, helping with their faster identification in the future.
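To make that last idea concrete, here is a minimal sketch of how a database of known fakes might be consulted, assuming Python with the third-party Pillow and imagehash libraries. Perceptual hashes are used so that a re-compressed or resized copy of a known fake still matches; the file names, threshold and registry are illustrative assumptions, not any organization’s actual system.

```python
# A minimal sketch of the "global database of known fakes" idea above,
# using perceptual hashes so a re-encoded or lightly edited copy of a
# known fake still matches. Requires the third-party Pillow and imagehash
# packages; all file names and the threshold are illustrative assumptions.
from PIL import Image
import imagehash

# Hypothetical registry: perceptual hashes of media already flagged as fake.
known_fake_paths = ["known_fake_1.png", "known_fake_2.png"]  # placeholders
known_fake_hashes = [imagehash.phash(Image.open(p)) for p in known_fake_paths]

MAX_HAMMING_DISTANCE = 8  # tolerance for compression/resizing artifacts

def matches_known_fake(path: str) -> bool:
    """Return True if the image is perceptually close to a known fake."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_HAMMING_DISTANCE
               for known in known_fake_hashes)

print(matches_known_fake("suspicious_image.png"))  # placeholder input
```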

Beyond detection, there are incipient attempts at regulation. California recently passed Assembly Bill 730, which outlaws the distribution of maliciously deceptive deep fake videos of political candidates in the run-up to an election (a companion bill, AB 602, targets pornographic deep fakes, providing some measure of protection against invasion of individual privacy). Twitter is establishing its own guidelines to tag synthetic media (deep fakes) with community help. Non-profit organizations like the Partnership on AI have established steering committees to study approaches to ensuring the integrity of perceptual media. Other technology companies, including Facebook and AI Foundation, have supported gathering and sharing benchmark data sets to help accelerate research into deep fake detection. AI Foundation has released a platform, called Reality Defender 2020, specifically to help combat the impact of deep fakes on the 2020 elections.

While policies are important, so is educating the public about the need to be skeptical of perceptions in this age of AI. After all, the shortcomings of today’s generation technology are not likely to persist into the future. In the long term, we should expect AI systems to be capable of producing fakes that can be spotted neither by us nor by our AI techniques. We will have to gird ourselves for a future in which our AI-generated doppelgangers may come across to our acquaintances as more authentic than we do ourselves. Hopefully, by then, we will have learned not to trust our senses blindly and, instead, to insist on provenance — such as cryptographic authentication techniques — to establish the trustworthiness of what we perceive. Asking our loved ones on the phone to provide authentication may offend our sense of trust, but it may be the price we have to pay as AI’s ability to generate and manipulate media becomes ever more sophisticated.
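As one illustration of what cryptographic provenance could look like, here is a minimal sketch, assuming the Python cryptography library: a capturing device signs a media file, and a recipient verifies the signature against the device’s public key. The key handling and placeholder bytes are deliberately simplified assumptions, not a production design.

```python
# A minimal sketch of cryptographic provenance for media: the capturing
# device signs the file's bytes, and anyone holding the device's public
# key can verify the file has not been altered since. Key management is
# deliberately simplified here (illustrative only).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key would live in the device's secure hardware.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

media_bytes = b"...raw bytes of a photo or video..."  # placeholder content

# Capture time: sign the media so its origin can later be established.
signature = device_key.sign(media_bytes)

# Verification time: a tampered file (or forged signature) fails the check.
try:
    public_key.verify(signature, media_bytes)
    print("Provenance verified: media is unchanged since signing.")
except InvalidSignature:
    print("Verification failed: media was altered or not from this device.")
```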

As deep fakes increase in sophistication, so will our immunity to them: We will learn not to trust our senses and to insist on authentication. The scary part of the deep fake future is not the long term but the short term, before we outgrow our “seeing is believing” mindset. One consolation is that this vulnerable short term may also be the only period in which AI can still be an effective part of the solution to the problem it has wrought.

Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and chief AI officer for AI Foundation, which focuses on the responsible development of AI technologies. He is a past president of the Association for the Advancement of Artificial Intelligence and was a founding board member of the Partnership on AI. He can be followed on Twitter @rao2z.

