Recent advances in large multimodal models (LMMs) have shown strong capabilities in audio understanding. However, most systems rely solely on end-to-end reasoning, which limits interpretability and accuracy on tasks that require structured knowledge or specialized signal analysis. In this work, we present Audio-Maestro -- a tool-augmented audio reasoning framework that enables audio-language models to autonomously call external tools and integrate their timestamped outputs into the reasoning process. This design lets the model analyze, transform, and interpret audio signals through specialized tools rather than through end-to-end inference alone. Experiments show that Audio-Maestro consistently improves general audio reasoning performance: average accuracy on MMAU-Test rises from 67.4% to 72.1% for Gemini-2.5-flash, from 58.3% to 62.8% for DeSTA-2.5, and from 60.8% to 63.9% for GPT-4o. To our knowledge, Audio-Maestro is the first framework to integrate structured tool outputs into the reasoning process of large audio-language models.
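The tool-augmented loop described above -- a model calling specialized tools and folding their timestamped outputs back into its reasoning context -- can be sketched minimally as follows. This is an illustrative assumption of how such a pipeline might be wired, not the paper's actual API; all names (`ToolOutput`, `register`, `compose_prompt`, the `event_detector` tool) are hypothetical.

```python
# Minimal sketch of a tool-augmented audio reasoning pipeline.
# Hypothetical names throughout; not Audio-Maestro's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolOutput:
    tool: str        # which tool produced this finding
    start_s: float   # timestamp where the finding begins (seconds)
    end_s: float     # timestamp where the finding ends (seconds)
    finding: str     # structured result, e.g. a detected sound event

# Registry of specialized audio tools; each maps raw audio bytes
# to a list of timestamped findings.
TOOLS: dict[str, Callable[[bytes], list[ToolOutput]]] = {}

def register(name: str):
    def deco(fn):
        TOOLS[name] = fn
        return fn
    return deco

@register("event_detector")
def detect_events(audio: bytes) -> list[ToolOutput]:
    # Placeholder: a real tool would run a sound-event detection model.
    return [ToolOutput("event_detector", 1.2, 2.8, "dog bark")]

def compose_prompt(question: str, outputs: list[ToolOutput]) -> str:
    """Fold timestamped tool outputs into the model's reasoning context."""
    lines = [f"[{o.start_s:.1f}s-{o.end_s:.1f}s] {o.tool}: {o.finding}"
             for o in outputs]
    return "Tool evidence:\n" + "\n".join(lines) + f"\n\nQuestion: {question}"
```

In this sketch, the language model would see the composed prompt and ground its answer in the tool evidence rather than in end-to-end inference alone; which tools to invoke would itself be decided by the model.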