As artificial intelligence enters the music industry, incumbents have already begun to resist the transformation it will bring. But change is inevitable, and only those artists who embrace it will survive the transition.
This is a link-enhanced version of an article that first appeared in Mint. You can read the original here.
One of the more viral music hits of this year was a song called “Heart on My Sleeve” that, in just about a week, managed to rack up half a million plays on Spotify and over 15 million views on YouTube. At that pace, it was well on track to become one of the most popular songs of the year, until, all of a sudden, it was unceremoniously taken down.
At least one big reason for its popularity was the fact that it was being billed as a collaboration between rap artists Drake and The Weeknd, whose on-again-off-again relationship made their coming together for this single a huge event for their fans. Drake was the one who first pushed the little-known R&B singer into the limelight in 2010. So, everyone was surprised when, instead of signing up with Drake’s OVO Sound imprint, The Weeknd ended up with Republic Records. This sparked an online war of words, and things got even more complicated when rumours began to surface that Drake was going out with Bella Hadid, The Weeknd’s former girlfriend.
But, even though all this spicy history between the two artists made the anticipation of a collaboration appealing, it is not the reason this song went viral. That success was entirely down to the fact that even though the vocals on the track were unmistakably those of the two artists in question, neither of them had actually performed on it. Instead, the entire song had been generated using artificial intelligence (AI) software that had been trained on voice samples of the two artists—without their consent or even their involvement. This is why it was unceremoniously taken down shortly after it was posted by TikTok user @Ghostwriter977.
This whole incident raised a number of ethical and legal questions that existing laws on intellectual property are ill-equipped to answer, starting with the issue of attribution and royalties. If a song becomes a hit because of the fame of the artists who sing it, are those artists not entitled to a share of its profits? After all, it is their voices that listeners are coming to hear. But does anyone have the right to be paid royalties if they did not actually put in any work to sing the song? After all, isn’t the purpose of copyright to reward the labour, skill and capital invested in a creative endeavour? These are issues we have never really had to engage with before, since such things have only become possible thanks to recent advances in generative AI.
But before we engage with these legal issues in a meaningful way, it is important to reflect on some of the benefits that this new technology offers. Recently, music producer Timbaland released a preview of what might become possible. In a short video, he talks about his long-standing desire to collaborate with The Notorious B.I.G., an artist who was killed in a drive-by shooting 25 years ago, and about how AI has made it possible for him to create a song with Biggie’s voice over a Timbaland beat.
More recently, Paul McCartney confirmed that he was able to use AI to rescue John Lennon’s voice from low-quality cassette recordings and enhance it sufficiently to be able to use it to create the last ever Beatles album—more than four decades after his band-mate’s death.
Given how AI-based voice technologies can make these sorts of incredible collaborations possible—bringing into existence artistic partnerships that would otherwise not even have been conceivable—it would be a shame if, in amending our intellectual property laws, we end up stifling all this creativity.
As important as it is to give artists adequate protection over their voice—which to many is their primary creative asset—we need to ensure that we do not in the process destroy the many outcomes that these new technologies make possible.
Embrace the Change
One way of thinking about this might be the approach that the musician Grimes is looking to adopt. Instead of shoring up her legal defences, Grimes has decided to embrace the AI-based voice-cloning revolution. In a tweet, she said she would have no problem with anyone using her voice to create an AI-generated song, so long as they are willing to split 50% of all revenues the song earns with her. This, she says, is the same deal she offers any artist who collaborates with her.
I like this approach because it places AI-based music composition on a par with other forms of musical collaboration, encouraging creative experimentation while still offering the original artist a share of the revenue. Avant-garde musician Holly Herndon has taken the idea one step further. She has built an AI model of her voice and allowed anyone to use it for their own art. To facilitate this, she is releasing an AI tool called Holly+, with which anyone can upload polyphonic audio and receive a download of that music sung back to them by an AI clone of her voice.
AI has well and truly entered the music industry. As with everything else it has touched, it will dramatically alter the way in which music is created. Purists will—as they always have—resist this change, looking to amend the law to protect what they see as their creative right. But this technology is here to stay, and even if older artists resist, the newer generations will embrace it willingly. The new avenues of creativity and collaboration that this opens up will come to define the future of music.
We can choose to resist the inevitability of that future, or, like Grimes and Holly Herndon, embrace it.