We present a family of open-source Music Foundation Models designed to advance large-scale music understanding and generation across diverse tasks and modalities. Our framework consists of four major components: (1) HeartCLAP, an audio-text alignment model; (2) HeartTranscriptor, a robust lyric recognition model optimized for real-world music scenarios; (3) HeartCodec, a low-frame-rate (12.5 Hz) yet high-fidelity music codec tokenizer that captures long-range musical structure while preserving fine-grained acoustic details and enabling efficient autoregressive modeling; and (4) HeartMuLa, an LLM-based song generation model capable of synthesizing high-fidelity music under rich, user-controllable conditions (e.g., textual style descriptions, lyrics, and reference audio). HeartMuLa additionally provides two specialized modes: (i) fine-grained musical attribute control, which lets users specify the style of individual song sections (e.g., intro, verse, chorus) via natural language prompts; and (ii) short-form, engaging music generation suited for use as background music in short videos. Finally, HeartMuLa's generation quality improves significantly when scaled to 7B parameters. To the best of our knowledge, we show for the first time that a Suno-level, commercial-grade system can be reproduced with academic-scale data and GPU resources. We expect these foundation models to serve as strong baselines for future research and to facilitate practical applications in multimodal content production.
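To make the efficiency argument behind the 12.5 Hz frame rate concrete, the following is a minimal sketch of the token-budget arithmetic for autoregressive modeling. Only the 12.5 Hz figure comes from the abstract; the 50 Hz comparison rate, the single-codebook default, and the `codec_tokens` helper are illustrative assumptions, not part of the released models.

```python
# Illustrative token-budget arithmetic for a low-frame-rate codec tokenizer.
# Assumption: one token per codec frame per codebook; the 50 Hz comparison
# rate and the codebook count are hypothetical, chosen only for illustration.

def codec_tokens(duration_s: float, frame_rate_hz: float, codebooks: int = 1) -> int:
    """Number of discrete tokens an autoregressive LM must generate."""
    return int(duration_s * frame_rate_hz) * codebooks

song_s = 180.0  # a 3-minute song
print(codec_tokens(song_s, 12.5))  # 2250 tokens at HeartCodec's 12.5 Hz
print(codec_tokens(song_s, 50.0))  # 9000 tokens at a hypothetical 50 Hz codec
```

Under these assumptions, a full-length song at 12.5 Hz fits in roughly a quarter of the sequence length a higher-rate codec would require, which is what makes long-range autoregressive modeling of whole songs tractable.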