In the realm of music AI, arranging rich and structured multi-track accompaniment from a simple lead sheet presents significant challenges, including maintaining inter-track cohesion, ensuring long-term coherence, and optimizing computational efficiency. In this paper, we introduce a novel system that leverages prior modeling over disentangled style factors to address these challenges. Our method proceeds in two stages: first, a piano arrangement is derived from the lead sheet by retrieving piano texture styles; then, a multi-track orchestration is generated by infusing orchestral function styles into the piano arrangement. Our key design is the use of vector quantization and a unique multi-stream Transformer to model the long-term flow of the orchestration style, enabling flexible, controllable, and structured music generation. Experiments show that factorizing the arrangement task into interpretable sub-stages enhances generative capacity while improving efficiency. Our system also supports a variety of music genres and provides style control at different composition hierarchies. We further show that it achieves superior coherence, structure, and overall arrangement quality compared to existing baselines.
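The abstract's core mechanism, vector quantization, maps a continuous style embedding to the nearest entry of a learned codebook, so that style flow can be modeled as a discrete token sequence by a prior model such as a Transformer. The following minimal sketch illustrates only that nearest-codebook lookup; the function names and the toy 2-D codebook are illustrative assumptions, not the paper's implementation.

```python
# Minimal vector-quantization sketch (illustrative, not the paper's code):
# replace a continuous "style" embedding with its nearest codebook entry,
# producing a discrete index that a prior model can predict over time.

def quantize(embedding, codebook):
    """Return (index, code) of the codebook entry nearest in L2 distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: sq_dist(embedding, codebook[i]))
    return idx, codebook[idx]

# Hypothetical 4-entry codebook of 2-D style vectors.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

# A bar-level style embedding collapses to its nearest discrete code.
idx, code = quantize((0.9, 0.1), codebook)  # nearest entry is (1.0, 0.0)
```

In a full system, training would jointly learn the codebook with the encoder (e.g. via a straight-through estimator), and the resulting index sequence is what the long-term prior models.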