AI-generated music is officially a hit — how will it change the music industry?
Last week, an anonymous music producer released a song, “Heart on My Sleeve,” with vocals ostensibly sung by the artists Drake and The Weeknd but actually generated using artificial intelligence (AI). The track became a viral hit, partly because of the big names attached to it and partly because of the curiosity factor. Given the ChatGPT-induced panic about AI, it is tempting to think of this as a seismic event. But we contend that no musical Rubicon has been crossed.
No, AI hasn’t breached the music business yet.
It is true that no A-listers were involved in making the hit track. It is equally true that both music fans and reviewers gave the song a thumbs-up; some even say the fake trumps the original. The crux is that the lyrics, the rhythm, the beat and the vocals could pass for the real deal. One of us, a hit record producer and seasoned composer, concurs: the end product is entertaining and passes a music Turing test of sorts, in that it elicits an emotional reaction in the listener.
What happened next surprised no one. Hell hath no fury like a recording label scorned. The label in this case, Universal Music Group (UMG), maintains that the track violates copyrights it owns; after all, the AI was trained on UMG’s copyrighted music. The offending song was promptly deleted from streaming platforms and social media sites. Recording labels have armies of lawyers to work out new revenue-sharing formulas with streaming platforms, and they will play whack-a-mole with any recalcitrant would-be content providers and AI music tools. This is definitely music to lawyers’ ears.
What are the legal grey areas for future AI-generated music? If voice impressions are not banned, what is wrong with using AI to generate vocals, provided it is properly disclosed as such? Does it contravene the artist’s right of publicity? Does it violate a label’s exclusive right to use the artist’s name and likeness? Under the current legal framework, neither human nor synthetic voices can be copyrighted; new rules must be framed for brand likeness and the use of emerging AI tools.
So, AI’s got talent? Not so fast. This is not like the AI image generators DALL-E 2 or Midjourney, where you type a prompt and receive the desired image. Here, a human was needed to bring together the track’s production, lyrics and chord structures. AI assisted in sourcing the raw materials, but the bulk of the labor was in the curation. The result is clearly a hybrid of organic human creativity and synthetic machine intelligence. Although fully integrated AI music generators exist, they have yet to pass any music Turing test.
We are OK with Auto-Tune fixing vocal inconsistencies and making a singer’s voice sound better. We are OK with lip-syncing in concerts. We are OK with some sampling and with uncredited musicians playing instruments. We are OK with ghostwriters and hit-making producers. So why are we drawing the line at AI-created vocals?
More than two decades ago, Napster drew the ire of the music industry establishment when the industry’s revenue was threatened. What does the future hold in three or five years, when AI tools become much better? Will machines encroach further and further into one of humanity’s greatest creative endeavors? Pundits predicted that AI would automate accounting, finance, medicine and law, but we did not expect creative pursuits like writing, painting and music to follow suit so fast.
The real question will be whether purely AI-generated music can elicit an emotional response. For music to resonate, there has to be a deeper form of connection and meaning: feeling, moving, experiencing, all of which an automated process lacks. Paradoxically, AI could create a higher-value proposition for the real thing. As with knock-offs in fashion, there will be a market for mid-brow auto-songs and a market for hand-crafted haute couture music.
With AI’s assistance, we can make better use of real, organic, human-made art as a way to access our limbic and subconscious selves. There is hope: what we have in abundance is creativity, spirituality and that highest order of art’s existence, human expression and empathy. Artists should see AI as an opportunity and an extension of their brands. Future music fans are going to design the music they want to hear. The creative class must learn to adopt AI and leverage it effectively, not lament the inevitable.
Fernando Garibay, a former executive, producer and artist at Interscope, is founder of the Garibay Center, a global creativity research institute. He has produced hit records for Lady Gaga, Whitney Houston, Shakira, Sia, Britney Spears and Paris Hilton, among others.
Kashyap Kompella is CEO of RPA2AI Research and a visiting professor for artificial intelligence at BITS School of Management (BITSoM).
James Cooper is professor of law at California Western School of Law in San Diego and served as a U.S. delegate to the World Intellectual Property Organization’s Advisory Committee on Enforcement.