With rapid advances in generative artificial intelligence, text-to-music synthesis has emerged as a promising direction for music generation. Nevertheless, achieving precise control over multi-track generation remains an open challenge. While existing models excel at directly generating multi-track mixes, their limitations become evident when it comes to composing individual tracks and integrating them in a controllable manner. This departure from the typical workflow of professional composers hinders the ability to refine details within specific tracks. To address this gap, we propose JEN-1 Composer, a unified framework designed to efficiently model marginal, conditional, and joint distributions over multi-track music with a single model. Building upon an audio latent diffusion model, JEN-1 Composer extends its versatility to multi-track music generation. We introduce a progressive curriculum training strategy that gradually escalates the difficulty of training tasks while preserving the model's generalization ability and facilitating smooth transitions between scenarios. During inference, users can iteratively generate and select individual tracks, incrementally composing complete musical pieces in a Human-AI co-composition workflow. Our approach achieves state-of-the-art performance in controllable, high-fidelity multi-track music synthesis, marking a significant advance in interactive AI-assisted music creation. Our demo pages are available at www.jenmusic.ai/research.
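To make the unified-distribution idea concrete, the sketch below shows one plausible way a single denoiser could cover marginal, conditional, and joint generation over tracks: at each training step, a random subset of tracks is given as clean conditioning while the rest are denoised, with a curriculum stage that gradually allows more conditioning tracks. This is a minimal illustration, not the authors' implementation; all names (`ComposerNet`, `N_TRACKS`, the toy noise schedule, and the MLP denoiser) are hypothetical placeholders for the paper's latent diffusion architecture.

```python
# Minimal sketch (NOT the JEN-1 Composer codebase) of unified multi-track
# diffusion training: one model, with a task mask selecting which tracks are
# conditioning context and which are generated.
import torch
import torch.nn as nn

N_TRACKS = 4      # e.g. bass, drums, instruments, melody (assumed)
LATENT_DIM = 128  # latent channels per track (assumed)
T_STEPS = 1000    # diffusion timesteps (assumed)

class ComposerNet(nn.Module):
    """Toy stand-in for the shared denoiser over stacked track latents."""
    def __init__(self):
        super().__init__()
        # Input: noisy latents + clean conditioning latents + task mask.
        self.net = nn.Sequential(
            nn.Linear(2 * N_TRACKS * LATENT_DIM + N_TRACKS, 512),
            nn.SiLU(),
            nn.Linear(512, N_TRACKS * LATENT_DIM),
        )

    def forward(self, noisy, cond, mask):
        x = torch.cat([noisy.flatten(1), cond.flatten(1), mask], dim=-1)
        return self.net(x).view(-1, N_TRACKS, LATENT_DIM)

def sample_task_mask(batch, max_cond_tracks):
    """Curriculum: stage 0 is pure joint generation (no conditioning);
    later stages allow more tracks to be supplied as context."""
    k = torch.randint(0, max_cond_tracks + 1, (batch,))
    mask = torch.zeros(batch, N_TRACKS)
    for i in range(batch):
        idx = torch.randperm(N_TRACKS)[: int(k[i])]
        mask[i, idx] = 1.0  # 1 = conditioning track, 0 = generated track
    return mask

def training_step(model, latents, stage):
    """One denoising-loss step; `stage` escalates as training progresses."""
    b = latents.size(0)
    mask = sample_task_mask(b, max_cond_tracks=min(stage, N_TRACKS - 1))
    t = torch.randint(0, T_STEPS, (b,))
    noise = torch.randn_like(latents)
    alpha = (1 - t.float() / T_STEPS).view(b, 1, 1)  # toy linear schedule
    noisy = alpha.sqrt() * latents + (1 - alpha).sqrt() * noise
    gen = (1 - mask).unsqueeze(-1)                   # tracks to denoise
    model_in = noisy * gen + latents * (1 - gen)     # cond tracks stay clean
    pred = model(model_in, latents * (1 - gen), mask)
    # Loss only on generated tracks: marginal (one track), conditional
    # (some tracks given), and joint (no tracks given) share one objective.
    return ((pred - noise) ** 2 * gen).mean()

model = ComposerNet()
latents = torch.randn(8, N_TRACKS, LATENT_DIM)  # dummy pre-encoded latents
loss = training_step(model, latents, stage=2)
loss.backward()
print(f"toy loss: {loss.item():.4f}")
```

Under this framing, the iterative inference workflow described above corresponds to repeatedly running the sampler with previously accepted tracks set as conditioning (mask = 1) and the remaining tracks generated, so the user can accept or regenerate one track at a time.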