The views expressed by contributors are their own and not the view of The Hill

Four rules to make artificial intelligence work for humans

The history of technology is the story of human progress. Every big advance opens up wonderful possibilities and poses pivotal questions. We all sense what a leap forward artificial intelligence (AI) could be. We’re optimistic about its potential but wary of any unintended consequences.

At first glance, music may not be the obvious place to look for signposts of AI’s future. But think of the new worlds unlocked by inventions like the printing press, the phonograph, radio and the internet. Each pushed music further and faster as an art form, and proved its power to drive the adoption of new technology. Music has often been the canary in the coal mine.

AI is no different. Music has already given us glimpses of what this incredible technology can do when artists are on board — whether it’s musicians permitting fans to create new songs using replicas of their voices, artists’ estates empowering posthumous biopics featuring perfect replicas of late stars, or singers who have lost their voice to illness miraculously recording again. 

At the same time, we’ve witnessed the specter of AI’s possible downside for artists: vast scraping and copying of creative works, and the rise of deep fakes where artists’ voices, faces and identities are appropriated without their consent, to depict them doing, saying, and singing things that never happened.

The starkly different positive and negative uses of AI that music is grappling with today represent two alternate versions of the future — with artists at the tip of the spear. 

Every person’s voice and image are as individual as their fingerprints and lie at the very core of their identity and livelihood. The issue of misleading deep fakes is starting to affect everyone — politicians impersonated in deep fake videos, families defrauded by voice clones pretending to be relatives, people placed into pornography without consent, and even schoolchildren victimized with their faces inserted into humiliating scenes.

Surprisingly, there is currently no federal law that fully protects our identities from use without permission. That must change as soon as possible.

Having spent my career at YouTube, Netflix and now Warner Music Group, I have innovation in my DNA. But responsible AI will depend on human beings being able to protect their identities. To achieve that, we need four things:

  • Consent: People’s likenesses and voices may be used to train AI models and to generate outputs only with their consent.
  • Monetization: The grant of likeness and voice rights must be subject to free-market licenses.

To operationalize consent and monetization at scale, we also need:

  • Attribution: AI models must identify outputs through labels and watermarks, which in turn must be recognized by all distribution platforms.
  • Provenance: AI developers must be required to keep sufficiently detailed, publicly available records of the material used to train their models to enable the enforcement of rights.

I am honored to be testifying on April 30 before the Senate Judiciary Subcommittee on Intellectual Property to offer ideas related to the NO FAKES Act proposal from Sens. Chris Coons (D-Del.) and Marsha Blackburn (R-Tenn.), which is aimed at defending the unique human identity of every American. It is also encouraging that the No AI FRAUD Act, introduced by Reps. María Salazar (R-Fla.) and Madeleine Dean (D-Pa.), enjoys bipartisan support in the House.

These proposals would establish clear ground rules, helping to prevent a world where AI can appropriate a person’s identity without their permission. This would also have a “halo effect,” helping to curb misinformation, identity theft and hate-fueled content.

Critics will try to muddy the waters by raising supposed implications for news reporting or parody. Any proposed legislation should expressly acknowledge the First Amendment’s broad protections while forbidding serious harms that fall outside them. As always, the music community will stand up for freedom of speech.

I am excited to see what the future holds; it’s a road we need to travel thoughtfully but swiftly. Ultimately, the question isn’t where AI is taking humanity, it’s where humanity will take AI. Together, we share in that opportunity and responsibility. 

Robert Kyncl is the CEO of Warner Music Group.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.