Recent advances in text-to-music generation (TTM) have yielded high-quality results, but often at the cost of extensive compute and reliance on large proprietary internal data. To improve the affordability and openness of TTM training, an open-source generative model backbone that is more training- and data-efficient is needed. In this paper, we constrain the number of trainable parameters in the generative model to match that of the MusicGen-small benchmark (about 300M parameters), and replace its Transformer backbone with the emerging class of state-space models (SSMs). Specifically, we explore different SSM variants for sequence modeling, and compare a single-stage SSM-based design with a decomposable two-stage SSM/diffusion hybrid design. All proposed models are trained from scratch on a fully public dataset comprising 457 hours of CC-licensed music, ensuring complete openness. Our experimental findings are three-fold. First, we show that SSMs exhibit superior training efficiency compared to their Transformer counterpart. Second, despite using only 9% of the FLOPs and 2% of the training data of the MusicGen-small benchmark, our model achieves competitive performance in both objective metrics and subjective listening tests based on MusicCaps captions. Finally, our scaling-down experiment demonstrates that SSMs can maintain competitive performance relative to the Transformer baseline under the same training budget (measured in iterations), even when the model size is reduced to one quarter. To facilitate the democratization of TTM research, the processed captions, model checkpoints, and source code are available on GitHub via the project page: https://lonian6.github.io/ssmttm/.