Current audio foundation models typically rely on rigid, task-specific supervision, addressing isolated facets of audio rather than the signal as a whole. In contrast, human intelligence processes audio holistically, seamlessly bridging physical signals with abstract cognitive concepts to execute complex tasks. Grounded in this philosophy, we introduce Bagpiper, an 8B audio foundation model that interprets physical audio via rich captions, i.e., comprehensive natural-language descriptions that encapsulate the critical cognitive concepts inherent in the signal (e.g., transcription, audio events). By pre-training on a massive corpus of 600B tokens, the model establishes a robust bidirectional mapping between raw audio and this high-level conceptual space. During fine-tuning, Bagpiper adopts a caption-then-process workflow, simulating an intermediate cognitive reasoning step to solve diverse tasks without task-specific priors. Experimentally, Bagpiper outperforms Qwen-2.5-Omni on MMAU and AIRBench for audio understanding, surpasses CosyVoice3 and TangoFlux in generation quality, and is capable of synthesizing arbitrary compositions of speech, music, and sound effects. To the best of our knowledge, Bagpiper is among the first works to achieve unified understanding and generation for general audio. Model, data, and code are available at the Bagpiper Home Page.
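To make the caption-then-process workflow concrete, the following is a minimal, hypothetical sketch (the function and class names are illustrative, not the released API): the model first verbalizes raw audio into a rich caption, then conditions the downstream task on that caption rather than on task-specific audio features.

```python
# Hypothetical sketch of the caption-then-process workflow described above.
# Names (Audio, caption, process) are illustrative placeholders, not Bagpiper's API.

from dataclasses import dataclass


@dataclass
class Audio:
    samples: list[float]
    sample_rate: int


def caption(audio: Audio) -> str:
    """Stage 1 (hypothetical): map raw audio to a rich natural-language caption
    covering e.g. transcription and audio events."""
    # A real model call would go here; a placeholder caption is returned instead.
    return "A woman says 'hello' while rain falls and a piano plays softly."


def process(caption_text: str, instruction: str) -> str:
    """Stage 2 (hypothetical): solve the user's task by reasoning over the
    caption, without task-specific priors."""
    # A real model call would go here; a placeholder answer is returned instead.
    return f"Answer to '{instruction}' based on: {caption_text}"


if __name__ == "__main__":
    audio = Audio(samples=[0.0] * 16000, sample_rate=16000)
    rich_caption = caption(audio)  # intermediate cognitive reasoning step
    print(process(rich_caption, "What instruments are playing?"))
```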