Generating music with coherent structure and harmonious instrumental and vocal elements remains a significant challenge in song generation. Existing language-model and diffusion-based methods often struggle to balance global coherence with local fidelity, yielding outputs that lack musicality or suffer from incoherent progression and mismatched lyrics. This paper introduces $\textbf{SongBloom}$, a novel framework for full-length song generation that leverages an interleaved paradigm of autoregressive sketching and diffusion-based refinement. SongBloom employs an autoregressive diffusion model that combines the high fidelity of diffusion models with the scalability of language models. Specifically, it gradually extends a musical sketch from short to long and refines its details from coarse to fine-grained. The interleaved generation paradigm effectively integrates prior semantic and acoustic context to guide the generation process. Experimental results demonstrate that SongBloom outperforms existing methods on both subjective and objective metrics and achieves performance comparable to state-of-the-art commercial music generation platforms. Audio samples are available on our demo page: https://cypress-yang.github.io/SongBloom_demo. The code and model weights have been released at https://github.com/Cypress-Yang/SongBloom.
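The interleaved paradigm described above (alternating coarse autoregressive sketch extension with diffusion-based refinement of the new segment) can be illustrated with a minimal toy sketch. All function names, token vocabularies, and step counts below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar_sketch_step(sketch, step_len=4):
    """Hypothetical stand-in for the autoregressive sketcher:
    extend the coarse token sketch by a few new tokens
    (here just random tokens; the real model conditions on context)."""
    new_tokens = rng.integers(0, 256, size=step_len)
    return np.concatenate([sketch, new_tokens])

def diffusion_refine(coarse_segment, n_steps=8):
    """Hypothetical stand-in for the diffusion refiner:
    start from noise and iteratively denoise toward the coarse segment."""
    x = rng.standard_normal(coarse_segment.shape)
    target = coarse_segment.astype(float)
    for _ in range(n_steps):
        x = x + 0.5 * (target - x)  # each step moves the latent closer
    return x

# Interleaved generation: sketch a little, refine that segment, repeat,
# so semantic (sketch) and acoustic (refined) context accumulate together.
sketch = np.empty(0, dtype=np.int64)
refined_segments = []
for _ in range(3):                       # three interleaved rounds
    sketch = ar_sketch_step(sketch)      # coarse, short-to-long extension
    refined_segments.append(diffusion_refine(sketch[-4:]))  # refine new part

song = np.concatenate(refined_segments)
print(song.shape)  # (12,)
```

The design choice the sketch mirrors is that refinement happens per segment as the sketch grows, rather than once over the full song, which is what lets earlier refined audio feed back into later generation.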