
Music & Audio

3 articles

Fraudsters now upload roughly 60,000 AI-generated tracks a day to major streaming platforms, more than the entire U.S. industry's 2015 output of 57,000 songs. Bots then fake an estimated 85% of streams for this content, diverting 8-9% of global streams, or $2-3 billion in royalties, away from real artists. Platforms like Spotify remove millions of suspicious tracks and deploy AI detectors to flag unnatural listening patterns, but the arms race continues. Musicians face diluted earnings and buried visibility, and are calling for labeling standards and royalty reforms to protect human creativity amid AI's rise.


Bandcamp has banned all AI-generated music, relying on community flagging to protect human artistry. The Grammy Awards exclude tracks without human authorship, a line tested by the Beatles' AI-assisted "Now and Then" win. The US Copyright Office denies protection to fully AI-generated works but allows registration of human-authored elements. Platform policies vary: TikTok and Apple Music pull problematic tracks such as HAVEN's "I Run," Spotify purges low-quality AI slop while keeping other AI music, and Billboard has disqualified entries amid controversies. Deezer mandates labeling of AI content, while Sweden's charts exclude songs that are mostly AI-generated. These policies highlight the tension between innovation and authenticity, with growing calls for transparency so listeners know what they are hearing.


will.i.am compares AI in music to the sampling debates of the 1970s, predicting it will evolve from today's "slop" (generic, low-effort outputs) into sophisticated, autonomous creation within 20 years. He highlights the ethical issues: AI trains on human music without compensation, much as early hip-hop sampled records that later had to be cleared and licensed. As an ASU professor, he teaches students to build personal AI agents on GPUs, emphasizing data ownership as something like a digital bank account. In a fragmented streaming era, where TikTok virality trumps communal hits, live improvisation becomes key to proving authenticity against deepfakes. His career, from the Black Eyed Peas to solo work, shows that technology can amplify creativity if regulated well.
