Generating music with coherent structure and harmonious instrumental and vocal elements remains a significant challenge in song generation. Existing language-model and diffusion-based methods often struggle to balance global coherence with local fidelity, producing outputs that lack musicality or suffer from incoherent progression and mismatched lyrics. This paper introduces $\textbf{SongBloom}$, a novel framework for full-length song generation that leverages an interleaved paradigm of autoregressive sketching and diffusion-based refinement. SongBloom employs an autoregressive diffusion model that combines the high fidelity of diffusion models with the scalability of language models. Specifically, it gradually extends a musical sketch from short to long and refines the details from coarse to fine-grained. This interleaved generation paradigm effectively integrates prior semantic and acoustic context to guide the generation process. Experimental results demonstrate that SongBloom outperforms existing methods across both subjective and objective metrics and achieves performance comparable to state-of-the-art commercial music generation platforms. Audio samples are available on our demo page: https://cypress-yang.github.io/SongBloom_demo. The code and model weights have been released at https://github.com/Cypress-Yang/SongBloom.
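To make the interleaved paradigm described above concrete, the sketch below illustrates one plausible generation loop: an autoregressive model extends a coarse musical sketch segment by segment, and a diffusion model refines each new segment into fine-grained acoustic detail, conditioned on all previously generated semantic and acoustic context. This is a minimal illustration under stated assumptions, not the authors' implementation; names such as `sketch_lm`, `refiner`, and `segment_len` are hypothetical placeholders.

```python
import torch

def generate_song(lyrics_tokens, sketch_lm, refiner, num_segments, segment_len):
    """Hypothetical interleaved sketch-and-refine loop (illustrative only)."""
    sketch_ctx = []    # coarse semantic sketch tokens generated so far
    acoustic_ctx = []  # refined fine-grained acoustic latents generated so far

    for _ in range(num_segments):
        # 1) Autoregressive sketching: extend the sketch by one short segment,
        #    conditioned on the lyrics and all previously generated context.
        new_sketch = sketch_lm.generate(
            lyrics=lyrics_tokens,
            sketch_prefix=torch.cat(sketch_ctx, dim=1) if sketch_ctx else None,
            acoustic_prefix=torch.cat(acoustic_ctx, dim=1) if acoustic_ctx else None,
            max_new_tokens=segment_len,
        )

        # 2) Diffusion refinement: denoise fine-grained acoustic latents for the
        #    new segment, guided by the sketch and the prior acoustic context.
        new_acoustic = refiner.sample(
            condition=new_sketch,
            acoustic_prefix=torch.cat(acoustic_ctx, dim=1) if acoustic_ctx else None,
        )

        sketch_ctx.append(new_sketch)
        acoustic_ctx.append(new_acoustic)

    # Concatenate the refined segments into the full-length song representation.
    return torch.cat(acoustic_ctx, dim=1)
```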