Cover songs are a vital part of musical culture: they preserve the core melody of an original composition while reinterpreting it to bring new emotional depth and thematic emphasis. Although prior research has explored the reinterpretation of instrumental music with melody-conditioned text-to-music models, the task of cover song generation remains largely unaddressed. In this work, we reformulate cover song generation as a conditional generation task that simultaneously generates new vocals and accompaniment, conditioned on the original vocal melody and a text prompt. To this end, we present SongEcho, which leverages Instance-Adaptive Element-wise Linear Modulation (IA-EiLM), a framework that achieves controllable generation by improving both the conditioning injection mechanism and the conditional representation. To strengthen the injection mechanism, we extend Feature-wise Linear Modulation (FiLM) to Element-wise Linear Modulation (EiLM), enabling precise temporal alignment for melody control. For the conditional representation, we propose Instance-Adaptive Condition Refinement (IACR), which refines conditioning features by interacting with the hidden states of the generative model, yielding instance-adaptive conditioning. In addition, to address the scarcity of large-scale, open-source full-song datasets, we construct Suno70k, a high-quality AI-generated song dataset enriched with comprehensive annotations. Experimental results across multiple datasets demonstrate that our approach generates better cover songs than existing methods while requiring fewer than 30% of the trainable parameters. The code, dataset, and demos are available at https://github.com/lsfhuihuiff/SongEcho_ICLR2026.
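To make the FiLM-to-EiLM distinction mentioned above concrete, the following is a minimal sketch (not the paper's implementation): standard FiLM predicts one scale/shift pair per feature channel and broadcasts it over all time steps, whereas an element-wise variant supplies a distinct scale/shift for every time-feature element, allowing the modulation to vary frame by frame — the property needed for temporal alignment with a melody condition. Shapes, function names, and the random placeholder parameters are illustrative assumptions; in the actual model these parameters would be predicted from the conditioning signal.

```python
import numpy as np

def film(hidden, gamma, beta):
    """FiLM: one (gamma, beta) pair per feature channel,
    broadcast identically across all time steps.
    hidden: (T, C); gamma, beta: (C,)"""
    return gamma[None, :] * hidden + beta[None, :]

def eilm(hidden, gamma, beta):
    """Element-wise variant: a distinct (gamma, beta) pair for
    every time-feature element, so modulation can differ at each
    frame. hidden, gamma, beta: all (T, C)"""
    return gamma * hidden + beta

# Toy demonstration with random placeholder parameters
# (in practice these would come from a conditioning network).
T, C = 4, 3
rng = np.random.default_rng(0)
h = rng.standard_normal((T, C))

g_f, b_f = rng.standard_normal(C), rng.standard_normal(C)
g_e, b_e = rng.standard_normal((T, C)), rng.standard_normal((T, C))

out_film = film(h, g_f, b_f)   # same modulation at every frame
out_eilm = eilm(h, g_e, b_e)   # frame-varying modulation
assert out_film.shape == out_eilm.shape == (T, C)
```

Note that FiLM is the special case of the element-wise form in which the (T, C) parameters are constant along the time axis, so the element-wise variant strictly generalizes it.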