Large Audio Language Models struggle to disentangle overlapping events in complex acoustic scenes, yielding temporally inconsistent captions and frequent hallucinations. We introduce the Timestamped Audio Captioner (TAC), a model that produces temporally grounded audio descriptions at varying levels of detail and temporal resolution. TAC is trained with a synthetic data pipeline that constructs challenging, dynamic mixtures from real-world audio sources, enabling robust learning under realistic polyphonic conditions. Across event detection and dense captioning, TAC outperforms all competing methods, with a low hallucination rate and accurate temporal grounding. We also introduce TAC-V, an audio-visual pipeline that generates semantically rich audio-visual descriptions. We then show that TAC and TAC-V serve as a "semantic bridge" for a text-only reasoner: simple TAC$\rightarrow$LLM and TAC-V$\rightarrow$LLM cascades achieve state-of-the-art scores on benchmarks for audio (MMAU-Pro, MMSU, MMAR) and audio-visual (DailyOmni, VideoHolmes) understanding and reasoning, respectively.
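To make the synthetic data pipeline concrete, the sketch below shows one plausible way to build a polyphonic mixture with timestamped ground truth: labelled source clips are overlaid at random onsets, and each onset/offset pair becomes a caption target. This is a minimal illustration under assumed conventions (mono waveforms at a shared sample rate; the function and field names are hypothetical), not the paper's actual pipeline.

```python
import random
import numpy as np

def make_mixture(clips, sample_rate=16000, duration=10.0):
    """Overlay labelled source clips at random onsets to build a
    polyphonic mixture with timestamped ground truth.

    clips: list of (waveform, label) pairs; mono float arrays at sample_rate.
    Returns (mixture, events), where events is a list of
    (onset_sec, offset_sec, label) tuples usable as caption targets.
    """
    n = int(duration * sample_rate)
    mixture = np.zeros(n, dtype=np.float32)
    events = []
    for wav, label in clips:
        wav = wav[:n]  # truncate sources longer than the mixture
        start = random.randint(0, n - len(wav))
        mixture[start:start + len(wav)] += wav
        events.append((start / sample_rate,
                       (start + len(wav)) / sample_rate,
                       label))
    # Normalise to avoid clipping after summation.
    peak = np.max(np.abs(mixture))
    if peak > 1.0:
        mixture /= peak
    return mixture, sorted(events)
```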
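The TAC$\rightarrow$LLM cascade can likewise be sketched in a few lines: serialise the timestamped captions into a plain-text transcript and pass it, together with the question, to any text-only LLM. The event format and the `llm` callable here are assumptions for illustration; the paper does not prescribe this interface.

```python
def cascade_answer(events, question, llm):
    """Text-only reasoning over TAC output: serialise timestamped
    captions into a transcript and hand it to an LLM.

    events: list of (onset_sec, offset_sec, caption) tuples
            (hypothetical TAC output format, for illustration).
    llm:    any callable str -> str, e.g. a chat-completion wrapper.
    """
    transcript = "\n".join(
        f"[{on:5.1f}s - {off:5.1f}s] {caption}"
        for on, off, caption in sorted(events)
    )
    prompt = (
        "You are given a timestamped description of an audio clip.\n"
        f"{transcript}\n\n"
        f"Question: {question}\nAnswer concisely."
    )
    return llm(prompt)

# Usage with the events from the mixture sketch above:
# answer = cascade_answer(events, "Which sound starts first?", my_llm)
```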