Published on 22/08/2024 | Written by Heather Wright
AI eating musicians' lunch…
Kiwi and Australian music creators will likely be out of pocket to the tune of AU$519 million/NZ$570 million by 2028 as generative AI takes a bite out of their revenues, and they're calling on local governments to take action now.
But they're also open about embracing AI technology themselves, and believe AI can assist the human creative process.
Research conducted by Berlin consulting and research group Goldmedia for music rights management organisation APRA AMCOS says that by 2028, 23 percent of music creators' revenues will be at risk due to GenAI. That amounts to potential damage of AU$227 million in 2028 alone, and cumulative damage of more than half a billion dollars between 2024 and 2028.
“The issue lies not with the technology itself, but in the secretive corporate practices.”
Nearly 4,300 songwriters, composers and music publishers across Australia and New Zealand responded to the survey.
The total global market for generative AI was estimated at AU$5.4 billion in revenue in 2023, with music applications accounting for AU$430 million, or eight percent of the total market.
However, the GenAI music market is expected to grow more than tenfold by 2028, to over AU$4.5 billion, a figure corresponding to 28 percent of global music copyright collections in 2022.
Dean Ormston, APRA AMCOS chief executive, says the issue lies not with the technology itself, but in the secretive corporate practices that erode trust within the global creative sector.
“Creators invest significant time and effort into their work, yet their intellectual property is exploited by AI platforms without credit, consent or compensation. This unauthorised use poses a serious threat to the economic and cultural landscape, potentially damaging careers and businesses, including those of First Nations creators,” he says.
“For a generative AI market to be fair, equitable and sustainable, it must rest on a solid regulatory foundation that upholds the rights of human creators and protects their intellectual property.
“Transparency is crucial to this process.”
The training of GenAI models on data from vast swathes of the internet has been an increasing bone of contention, not just for the music sector.
In the music sector, Spotify has faced complaints about AI-generated music on the platform, with alleged AI-generated bands like Jet Fuel & Ginger Ales – a Spotify 'verified artist' – generating hundreds of thousands of listens and eating into the royalty pool available to human musicians. Spotify chief executive Daniel Ek has said in the past that the company has no plans to ban AI music.
Ed Newton-Rex, who resigned from his role leading Stability AI’s audio team last year over concerns about training generative AI on copyrighted works – something he says is not ‘fair use’ – has slammed Spotify for actively recommending AI music made by AI models that may be trained on copyrighted music without permission.
Newton-Rex, who is himself a composer, is now the chief executive of Fairly Trained, an organisation which 'certifies' AI companies that don't use any copyrighted work without a licence.
He says Spotify should ban any music made using AI models trained on copyrighted music without permission, as those models are trained on people's music to compete with them, and should clearly label any AI music it does offer.
“There’s not only a clear ethical case for this, but a business case too. Platforms that fill themselves with copyright-laundering AI slop will become irrelevant,” he said on X.
“Generative AI competes with the music it’s trained on. It must be required to license that music,” he says.
Visual artists have already sued generative AI art generators. Earlier this month a US federal court judge declined to dismiss copyright infringement claims against AI companies.
And authors have also taken action: Anthropic has been sued in a class action lawsuit in California by three authors who say it misused their books, and hundreds of thousands of others, to train the Claude chatbot. The company is also facing action over alleged misuse of copyrighted song lyrics.
They're among a number of actions filed by copyright holders over the rights of tech companies to use copyrighted material, without payment, to train models – often training them to compete directly with the copyright holders themselves. OpenAI, Meta and Stability AI are all facing lawsuits.
And it's not just the use of copyrighted content: much of the searchable internet has been used as the training ground for GenAI, raising questions around the privacy of data shared online – and around potential bias.
The APRA AMCOS report, AI + Music in Australia and New Zealand, found 82 percent of music creators across A/NZ are concerned that the use of AI in music could mean they can no longer make a living from their work.
“Despite the fact that copyrighted music is used as training data for GenAI models and therefore forms a fundamental basis for the origin and development of the market, music creators do not participate in the immense growth prospects,” the report says.
“So far, there is no remuneration system that closes the AI-generated financial gap for music creators.”
Peter Garrett, former Midnight Oil frontman and a former Australian minister for the environment and water, says without robust laws to ensure copyright holders are adequately remunerated, licenses applied and transparency around the actual processes used ‘when a creator’s work is exploited, then we are in deep trouble’.
Ninety-six percent of respondents to the survey called for AI providers to be obliged to disclose when they use copyrighted works as training data, with 93 percent wanting AI-generated music and other types of work to be identified as such and 95 percent wanting it to be a requirement that copyright holders are asked for permission before their work is used in AI systems.
“The overwhelming majority of music creators demand credit, transparency, consent and fair remuneration when their work is used in any context of GenAI in music. They stress the importance of clear rules and regulations for the use of copyrighted works.”
But the report also notes that music creators aren't anti-AI in general. It found 38 percent of those surveyed have used AI in their music work, with five percent saying they use it 'always or almost always' and the remaining 33 percent saying they use it on a regular basis. Interestingly, the figure is even higher among those aged between 45 and 54, with 45 percent using AI.
Just 27 percent say they refuse to use AI.
Among those using AI, it seems to have proven successful, with 71 percent saying they plan to continue using it in future. Overall, however, just four percent believe AI is 'very positive' for the music sector, with 37 percent saying it has both positives and negatives, and 32 percent viewing it as 'very negative'.
APRA AMCOS says it began briefing key Australian ministerial offices and department officials on the findings prior to publishing the report, and will do the same in New Zealand.
“We need to ensure there is an urgency to finding a regulatory solution in both territories that can support the Australian and New Zealand music ecosystem,” APRA says.
“Australia and New Zealand have the chance to lead globally in ensuring the creative sector benefits from the projected wealth generation of generative AI,” Ormston adds.