Deepfake Music Raises Alarm as Sony Moves to Remove 135,000 Fake Songs


Sony Music has warned about the growing spread of deepfake music, revealing that it has requested the removal of more than 135,000 fake songs from streaming platforms. These tracks, created using artificial intelligence, imitate the voices and styles of major artists and divert earnings away from legitimate creators.

The issue impacts some of the biggest names in music, including Beyoncé, Harry Styles, and Queen. According to Sony, fraudsters use AI tools to clone vocals and release songs that appear authentic. They often time releases to coincide with official album promotions.

Dennis Kooker, president of Sony’s global digital business, explained that this activity causes direct financial losses. He added that fake releases can disrupt marketing campaigns. Fraudulent tracks often appear when an artist is already gaining attention, making it harder for fans to distinguish real work from imitation.

Sony estimates that the identified tracks represent only a fraction of what exists online. Since March last year, the company has flagged about 60,000 new fake songs linked to its artists. Other artists believed to be targeted include Bad Bunny, Miley Cyrus, and Mark Ronson.

Industry Growth Meets Rising AI Challenges

The disclosure came during the launch of the Global Music Report in London, which showed continued financial growth. According to the International Federation of the Phonographic Industry (IFPI), global recorded music revenue rose 6.4 percent last year to $31.7 billion, marking the 11th consecutive year of growth, driven largely by streaming subscriptions.

Despite this positive trend, executives say deepfake music and other forms of streaming manipulation are becoming more common. Fraudsters not only create fake songs but also inflate play counts to earn royalties. Industry estimates suggest that up to 10 percent of content on streaming platforms may be fraudulent.

The rise of artificial intelligence has made it easier and cheaper to produce convincing imitations. This intensifies challenges for record labels and streaming companies. Meanwhile, governments are still determining how to regulate the technology without slowing innovation.

At the London event, industry leaders reacted to a recent UK government report on AI regulation. Officials decided to reconsider plans that would have allowed AI companies to train systems on copyrighted material without permission. The announcement sparked cautious optimism.

Victoria Oakley, CEO of the IFPI, said policymakers are trying to balance protecting creativity with encouraging technological progress. She expressed hope that further discussions will produce fairer rules for artists and rights holders.

Calls are also growing for clearer labeling of AI-generated content. Executives argue that listeners should know whether a track is human-made or machine-generated. Without transparency, trust in digital music platforms could weaken.

Kooker cited the French streaming service Deezer as an early example of action. The company introduced tools to detect AI-generated songs and reported that about 34 percent of new uploads fall into that category. While not perfect, he said, such systems provide a starting point for addressing the problem.

Sony and other industry players believe stronger detection methods and clear labeling will be crucial to protecting artists. As deepfake music grows, they warn that maintaining trust and fairness in the streaming economy will depend on how quickly platforms adapt.
