Large Audio Language Models (LALMs), which couple acoustic perception with large language models (LLMs) to extract and understand diverse information from audio, have attracted intense interest from both the academic and industrial communities. However, existing LALMs are highly sensitive to how instructions are phrased, which affects both (i) instruction-following rates and (ii) task performance, yet no existing benchmark offers a systematic and comprehensive evaluation of this sensitivity. We introduce ISA-Bench, a dynamic benchmark that evaluates the instruction sensitivity of LALMs along three axes: instruction description, output format, and task composition. Using ISA-Bench, we assess recent open-source and proprietary LALMs, profiling both compliance and accuracy under controlled instruction variations. Experimental results reveal that even state-of-the-art LALMs suffer from significant instruction sensitivity, which degrades their performance on fundamental audio understanding tasks. To mitigate this issue, we fine-tune Qwen2-Audio on a specifically constructed complex instruction-variant dataset, achieving a marked improvement in instruction-following performance. However, this fine-tuning also induces nontrivial catastrophic forgetting: the model loses some previously mastered task capabilities when exposed to new instruction styles. Our benchmark provides a standardized basis for assessing and improving the instruction sensitivity of LALMs, underscoring the need for instruction-robust audio understanding in real-world pipelines.