The Audio Question Answering (AQA) task includes audio event classification, audio captioning, and open-ended reasoning. Recently, AQA has garnered attention due to the advent of Large Audio Language Models (LALMs). Current literature focuses on constructing LALMs by integrating audio encoders with text-only Large Language Models (LLMs) through a projection module. While LALMs excel at general audio understanding, they are limited in temporal reasoning, which may hinder their commercial applications and on-device deployment. This paper addresses these challenges and limitations in audio temporal reasoning. First, we introduce a data augmentation technique for generating reliable audio temporal questions and answers using an LLM. Second, we further fine-tune an existing baseline with a curriculum learning strategy so that it specializes in temporal reasoning without compromising performance on its previously fine-tuned tasks. We compare our model against state-of-the-art LALMs on public audio benchmark datasets. Third, we deploy our AQA model on-device and investigate its CPU inference for edge applications.
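As a rough illustration of the first contribution, the sketch below shows one way temporal question-answer pairs could be templated from timestamped event annotations before an LLM paraphrases and verifies them. This is a minimal assumption-laden sketch, not the paper's actual pipeline; the event format and question templates are hypothetical.

```python
# Minimal sketch (not the paper's pipeline): template temporal QA pairs from
# hypothetical timestamped event annotations. An LLM would then paraphrase
# these questions and filter unreliable pairs.
from typing import List, Tuple

def temporal_qa_pairs(events: List[Tuple[str, float, float]]) -> List[Tuple[str, str]]:
    """events: (label, start_sec, end_sec) tuples; order in the list is arbitrary."""
    qa = []
    ordered = sorted(events, key=lambda e: e[1])  # sort by start time
    for (a, _, _), (b, _, _) in zip(ordered, ordered[1:]):
        # Ordering questions probe before/after relations between adjacent events.
        qa.append((f"Which sound occurs first, {a} or {b}?", a))
        qa.append((f"What sound follows the {a}?", b))
    return qa

pairs = temporal_qa_pairs([("car horn", 2.0, 2.8), ("dog bark", 0.5, 1.2)])
# Two adjacent events yield two templated QA pairs.
```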