Listeners engage less deeply with music attributed to AI than with music attributed to a human – even when the music is actually human-composed, a new peer-reviewed academic study has found.
The authors say their findings suggest “truthful attribution can have real consequences for how music is perceived and understood” – landing in the middle of an active industry debate over mandatory AI music disclosure.
The paper, published in the journal Cognitive Research: Principles and Implications in March, was authored by Sarah H. Wu of Stanford University and Kevin J. Holmes of Reed College.
Various music streaming services have been rolling out their own AI labeling systems in recent months.
Apple Music launched its Transparency Tags system in March, asking labels and distributors to flag AI use at the point of delivery.
Spotify followed last month with the beta launch of AI Credits in song credits, similarly relying on labels and distributors to self-disclose AI use.
In late April, Spotify went further by introducing a new “Verified by Spotify” badge, with the streaming giant saying that profiles “that appear to primarily represent AI-generated or AI-persona artists” would not be eligible for verification.
“In the AI era, it’s more important than ever to be able to trust the authenticity of the music you listen to,” Spotify said in an accompanying blog post.
Deezer has gone further still – independently detecting and tagging AI music at the platform level, and now reporting that around 75,000 fully AI-generated tracks are uploaded to its service every day, making up roughly 44% of daily deliveries.
Earlier this month, music supervisor Frederic Schindler made the case in an MBW op-ed for an industry-wide “Music Facts” disclosure protocol modeled on FDA nutrition labels, with “AI Generated” listed as one of four mandatory origin categories.
The Wu and Holmes paper was based on two preregistered studies involving 399 US participants, who listened to instrumental music clips and reported whether they imagined a story while listening – a phenomenon the researchers describe as “narrative listening”.
In the first study, participants heard six human-composed pieces – including works by Beethoven, Mozart, Debussy and Ravel – without being told who or what had composed them.
The more strongly listeners believed a given piece was computer-generated, the less likely they were to imagine a story – and the less engaging the stories they did imagine were, according to Wu and Holmes.
“FALSELY FRAMING AI-GENERATED ART AS A HUMAN CREATION MAY ELICIT A GREATER SENSE OF MEANING – BUT AT A COST TO HUMAN CREATORS, DEPRIVING THEM OF CREDIT AND COMPENSATION FOR THE WORK FROM WHICH AI PRODUCTS ARE DERIVED.”
SARAH H. WU AND KEVIN J. HOLMES
In the second study, the researchers used eight pieces – four human-composed, four generated by AI software AIVA – and labeled each one either “Composer: Human” or “Composer: AI” as it played.
The “AI”-labeled pieces elicited fewer and less engaging imagined narratives than the “Human”-labeled pieces – regardless of who or what had actually composed the music, the Stanford and Reed researchers reported.
That suppression was nominally stronger when applied to actual AI compositions – suggesting listeners were also picking up on acoustic markers of AI on top of the labels themselves, according to Wu and Holmes.
“Attributing music to AI is associated with – and can engender – an impoverished listening experience, devoid of the mental narratives that unfold as the composer’s musical choices guide the listener’s imagination,” Wu and Holmes wrote.
The authors said the label effect appeared to be driven by listeners ascribing less communicative intention to pieces marked as AI-made. “Labeling music as AI composed, truthfully or otherwise, may lead listeners to infer that the music lacks meaning or intensity,” Wu and Holmes wrote.
“Attributing music to AI is associated with – and can engender – an impoverished listening experience, devoid of the mental narratives that unfold as the composer’s musical choices guide the listener’s imagination.”
SARAH H. WU AND KEVIN J. HOLMES
Those findings land alongside parallel research from Kiel and Hamburg economists Jana Friedrichsen, Julia Schwarz, and Michel Clement.
Their three-study working paper, not yet peer-reviewed, was summarized in ProMarket last Monday (May 4), and found that listeners’ willingness to pay for AI-generated music drops when its AI origin is disclosed – an effect mainly driven by pop listeners.
“Consumers can only make informed choices if artists and music platforms are transparent about the use of AI,” Friedrichsen, Schwarz and Clement wrote.
The Wu and Holmes paper opens with a reference to The Velvet Sundown, the “band” that surpassed 1 million monthly Spotify listeners in 2025 before its operators confirmed the music was AI-generated.
The authors describe one of The Velvet Sundown’s songs as making “for an adequate road-trip soundtrack,” adding: “Much AI-generated music may go undetected because it is designed to blur the distinctive qualities of human-composed works, yielding a kind of algorithmically curated easy listening.”
For Wu and Holmes, that under-detection comes at a cost to human creators.
“Even though AI systems can produce works that look or sound impressive, audiences may engage with them in a rather shallow way, missing the human touch that makes art feel meaningful,” Wu and Holmes wrote.
Wu and Holmes added: “By the same token, falsely framing AI-generated art as a human creation may elicit a greater sense of meaning – but at a cost to human creators, depriving them of credit and compensation for the work from which AI products are derived.”
That risk to human creators was illustrated earlier this year by the case of Murphy Campbell, a North Carolina folk musician who discovered AI-cloned covers of her songs uploaded to her own Spotify profile, before a copyright troll claimed ownership of her legitimate YouTube recordings via the gamma-owned distributor Vydia.
The scale of that harm extends far beyond Campbell’s case. Sony Music Entertainment revealed at the launch of the IFPI’s Global Music Report 2026 in March that it had asked streaming platforms to take down more than 135,000 songs created by fraudsters using generative AI to impersonate its artists.
Dennis Kooker, Sony Music’s President of Global Digital Business and US Sales, said the deepfakes cause “direct commercial harm to legitimate recording artists.”
In a February MBW op-ed, IFPI CEO Victoria Oakley and RIAA CEO Mitch Glazier wrote that generative AI has “industrialized” streaming fraud.
Performance rights organization ASCAP, meanwhile, has been calling for transparency around AI use in musical works since 2023, when its board adopted six AI principles including a call to distinguish AI from human-generated works.
Concluding the paper, Wu and Holmes wrote: “For music to inspire our inner storyteller, it helps to know there’s a human mind behind it.”

Music Business Worldwide