Music stem generation, the task of producing musically synchronized, isolated instrument audio clips, offers the potential for greater user control and better alignment with musician workflows than conventional text-to-music models. Existing stem generation approaches, however, either rely on fixed architectures that output a predefined set of stems in parallel, or generate only one stem at a time, resulting in slow inference despite flexibility in stem combination. We propose Stemphonic, a diffusion-/flow-based framework that overcomes this trade-off and generates a variable set of synchronized stems in a single inference pass. During training, we treat each stem as a batch element, group synchronized stems within a batch, and apply a shared noise latent to each group. At inference time, we use a shared initial noise latent together with stem-specific text inputs to generate synchronized multi-stem outputs in one pass. We further extend our approach to enable one-pass conditional multi-stem generation and stem-wise activity controls, empowering users to iteratively generate and orchestrate the temporal layering of a mix. We benchmark our results on multiple open-source stem evaluation sets and show that Stemphonic produces higher-quality outputs while accelerating full-mix generation by 25-50%. Demos at: https://stemphonic-demo.vercel.app.
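The shared-noise batching described above can be illustrated with a minimal sketch: all stems in one synchronized group start from the same initial noise latent, while each batch element carries its own text condition. The function name, latent shape, and use of NumPy here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def make_shared_noise_batch(num_stems, latent_shape, seed=0):
    """Stack one shared initial noise latent per stem in a synchronized group.

    Illustrative sketch only: the real model operates on learned audio
    latents, but the key idea -- identical starting noise across stems,
    with per-stem text conditioning supplied separately -- is the same.
    """
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(latent_shape)  # one latent for the group
    return np.stack([shared] * num_stems)       # (num_stems, *latent_shape)

# Hypothetical example: three stems, each with its own text prompt.
prompts = ["punchy acoustic drums", "warm electric bass", "clean rhythm guitar"]
noise = make_shared_noise_batch(len(prompts), latent_shape=(8, 128))

assert noise.shape == (3, 8, 128)
# Every stem in the group begins from identical noise, which is what
# keeps the independently conditioned outputs temporally synchronized.
assert np.allclose(noise[0], noise[1]) and np.allclose(noise[1], noise[2])
```

Because diffusion/flow sampling is deterministic given the initial latent and conditioning, sharing the starting noise while varying only the text input is what lets a single batched denoising pass yield mutually synchronized stems.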