Auditory Large Language Models (LLMs) have demonstrated strong performance across a wide range of speech and audio understanding tasks. Nevertheless, they often struggle when applied to low-resource or unfamiliar tasks. When labeled in-domain data is scarce or mismatched to the true test distribution, direct fine-tuning can be brittle. In-Context Learning (ICL) offers a training-free, inference-time alternative that adapts auditory LLMs by conditioning on a few in-domain demonstrations. In this work, we first show that \emph{Vanilla ICL} improves over zero-shot performance across diverse speech and audio tasks for selected models, suggesting that this ICL adaptation capability generalizes to the multimodal setting. Building on this, we propose \textbf{Speech In-Context Learning Adaptation Training (SICL-AT)}, a post-training recipe that uses only high-resource speech data to strengthen a model's in-context learning capability. The resulting improvement also generalizes to audio understanding and reasoning tasks. Experiments show that our proposed method consistently outperforms direct fine-tuning in low-resource scenarios.