Recent advances in text-to-music generation models have opened new avenues for musical creativity. However, music generation usually involves iterative refinement, and how to edit the generated music remains a significant challenge. This paper introduces a novel approach to editing music generated by such models, enabling the modification of specific attributes, such as genre, mood, and instrument, while keeping other aspects unchanged. Our method transforms text editing into \textit{latent space manipulation}, adding an extra constraint to enforce consistency. It integrates seamlessly with existing pretrained text-to-music diffusion models without requiring additional training. Experimental results demonstrate superior performance over both zero-shot and certain supervised baselines in style and timbre transfer evaluations. Additionally, we showcase the practical applicability of our approach in real-world music editing scenarios.
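To make the idea concrete, the sketch below illustrates one plausible form of such a zero-shot editing loop: the text embedding is swapped partway through sampling (the latent-space manipulation), and a soft penalty keeps the edited trajectory close to the source trajectory (the consistency constraint). This is an illustration under stated assumptions, not the paper's implementation: \texttt{encode\_text}, \texttt{noise\_pred}, the linear schedule, and the parameters \texttt{swap\_frac} and \texttt{lam} are all hypothetical stand-ins for the pretrained model's actual components.
\begin{verbatim}
# Hedged sketch: zero-shot editing via latent-space manipulation of a
# pretrained text-to-music diffusion model. All model components here
# are hypothetical stubs, not the authors' code.
import torch

def encode_text(prompt: str) -> torch.Tensor:
    # Hypothetical frozen text encoder; returns a prompt embedding.
    torch.manual_seed(abs(hash(prompt)) % (2**31))
    return torch.randn(1, 77, 512)

def noise_pred(z_t, t, text_emb):
    # Hypothetical epsilon-predictor of the pretrained diffusion model.
    return 0.01 * t * z_t  # stub dynamics so the sketch runs

def edit_sample(z_T, src_prompt, tgt_prompt,
                steps=50, swap_frac=0.5, lam=0.1):
    src_emb, tgt_emb = encode_text(src_prompt), encode_text(tgt_prompt)
    z_src, z_tgt = z_T.clone(), z_T.clone()
    for i, t in enumerate(torch.linspace(1.0, 0.0, steps)):
        # Source trajectory: always conditioned on the original prompt.
        z_src = z_src - noise_pred(z_src, t, src_emb) / steps
        # Edited trajectory: keep the source prompt early (structure),
        # then switch to the target prompt (edited attribute).
        emb = src_emb if i < swap_frac * steps else tgt_emb
        z_tgt = z_tgt - noise_pred(z_tgt, t, emb) / steps
        # Consistency constraint: pull the edit toward the source
        # latents so unedited aspects (melody, rhythm) are preserved.
        z_tgt = z_tgt - lam * (z_tgt - z_src)
    return z_tgt

z0 = edit_sample(torch.randn(1, 8, 256),
                 "calm piano ballad", "calm guitar ballad")
print(z0.shape)  # edited latent; the model's decoder yields audio
\end{verbatim}
The swap point \texttt{swap\_frac} trades off preservation against edit strength: an earlier swap edits more aggressively, while a later swap keeps more of the original content.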