The country song sitting at No. 1 on Billboard’s Country Digital Song Sales chart last week — “Walk My Walk” by an artist called Breaking Rust — wasn’t created by a real person. The gravelly Southern drawl belting out lines like “I’m rough, I’m raw, I’m wild” was generated entirely by artificial intelligence.
The ultra-realistic AI-made track, which has racked up more than 4.6 million streams on Spotify, has ignited fierce debate online about the future of music and what happens when machines start topping the charts.
Inside Edition reports that the song was created using Suno, an AI-music platform founded and based in Cambridge. The milestone comes just weeks after Bloomberg reported that Suno is in talks to raise more than $100 million from investors — a deal that could value the company at over $2 billion, quadrupling its previous valuation.
Suno did not return a request for comment. But to break down what this means for musicians, listeners, and the industry at large, Boston.com spoke with Berklee College of Music professor Jonathan Wyner, who teaches an entire course on AI and music.
Suno AI creates music by using artificial intelligence to convert text prompts into complete songs with vocals and instruments.
By analyzing the user's input, whether a text description, custom lyrics, or even a hummed tune, the AI creates original music with vocals, instruments, and beats in a variety of genres, based on the musical styles and patterns it learned from vast datasets.
“It’s still really early days in understanding how this plays out from almost every perspective,” said Wyner. “The technology is still improving, but imperfect. For anybody who’s paying attention, it’s pretty easy to tell the difference between human-made music and machine-made music.”
Wyner says AI is a good tool for people who want to be creative and make music, and workflows for the technology are beginning to take shape.
For example, if you have a song but wonder what it would be like to have a horn section in the chorus, AI models can generate a horn section for you in about eight seconds, and give you a good idea whether that will work or not.
Instead of a text prompt, a user can prompt a machine with audio, like a voice memo. A user can test it with different singing voices and phrases. Wyner warns, though, that the voices don’t sound 100% human.
“I think it’s useful to think of it as a technology or a tool that can be used for many things, as opposed to an end in itself.”
Artists are already using these tools. Wyner said that for most new popular music released in the past few months, listeners can assume AI played some role in the production.
Artists rarely acknowledge it, he added — partly because it often doesn’t matter to the creative process, and partly because they fear being dismissed or “canceled” for using it.
Wyner said a major concern is whether AI models have been trained on music that’s been taken without permission.
There’s no universal legal ruling yet, but recent cases suggest that unauthorized scraping is often not considered fair use, while training on copyrighted material that was legally obtained may be allowed.
Universal Music Group recently went from suing Udio (another AI music company) and Suno to partnering with them to use UMG material to train AI models. Under the deal, Udio must also pay a royalty each time its model generates output from that data, and users can no longer download their original music.
“We’ll see what happens with this sort of question around ethics, but also law,” Wyner said.
“I think that the [U.S.] Copyright Office is way behind on this,” Wyner said. “I think legislation is ultimately probably going to end up trying to do something that conforms to what people are already doing, as opposed to shutting the whole phenomenon down.”
Wyner warned that there is the possibility of creating deepfakes — songs designed to sound as if they were made by existing, real-life artists — and hopes that safeguards are put in place.
Wyner is also skeptical that people will ever fall in love with AI artists, saying fans have relationships with artists based on their individual (and human) identities.
“There might be a wow factor initially, or a one-off,” said Wyner. “But you know AI artists don’t have careers at this point, and haven’t made 17 records and sold a billion copies. It’s more ephemeral.”
Lastly, Wyner says it is his job to educate people about the use of AI in the music industry. He encourages people to go out, try it, and decide for themselves whether or not they want to use it.
But, Wyner said, “I think putting our heads in the sand and wishing it would go away is probably not a great strategy for success.”
Is AI music legit? Email [email protected] and let us know what you think. Your response may be published in a future article.
Beth Treffeisen is a general assignment reporter for Boston.com, focusing on local news, crime, and business in the New England region.