Large audio-language models (LALMs) exhibit strong zero-shot capabilities in multiple downstream tasks, such as audio question answering (AQA) and abstract reasoning; however, these models still lag behind specialized models on certain discriminative tasks (e.g., audio classification). Recent studies show that sparse subsets of attention heads within an LALM can serve as strong discriminative feature extractors for downstream tasks such as classification via simple voting schemes. However, these methods assign uniform weights to all selected heads, implicitly assuming that each head contributes equally across all semantic categories. In this work, we propose Class-Conditional Sparse Attention Vectors for Large Audio-Language Models, a few-shot classification method that learns class-dependent importance weights over attention heads. This formulation allows individual heads to specialize in distinct semantic categories and to contribute to ensemble predictions in proportion to their estimated reliability. Experiments on multiple few-shot audio and audio-visual classification benchmarks and tasks demonstrate that our method consistently outperforms state-of-the-art uniform voting-based approaches, with absolute gains of up to 14.52%, 1.53%, and 8.35% for audio classification, audio-visual classification, and spoofing detection, respectively.
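The core idea, class-dependent importance weights over attention heads combined in a weighted ensemble vote, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy: it uses random arrays in place of real LALM attention-head features, cosine similarity to class prototypes as the per-head scoring rule, and per-class support-set accuracy as a stand-in for the paper's actual weight-learning procedure. All names (`support`, `prototypes`, `head_scores`, `predict`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_heads, n_classes, n_shots, dim = 4, 3, 5, 8

# Hypothetical few-shot support set: per-head feature vectors for each class.
# In practice these would be attention-head activations extracted from an LALM.
support = rng.normal(size=(n_classes, n_shots, n_heads, dim))

# Class prototype per head: mean of that class's support features.
prototypes = support.mean(axis=1)  # (n_classes, n_heads, dim)

def head_scores(x):
    """Cosine similarity of each head's feature to each class prototype.

    x: (n_heads, dim) -> scores: (n_heads, n_classes)
    """
    xn = x / np.linalg.norm(x, axis=-1, keepdims=True)
    pn = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    return np.einsum("hd,chd->hc", xn, pn)

# Class-conditional head weights: a crude reliability estimate, counting how
# often each head ranks the true class first on the support set (the paper's
# weight-learning rule may differ; this is only a stand-in).
weights = np.zeros((n_heads, n_classes))
for c in range(n_classes):
    for x in support[c]:
        s = head_scores(x)  # (n_heads, n_classes)
        weights[np.argmax(s, axis=1) == c, c] += 1
weights /= weights.sum(axis=0, keepdims=True) + 1e-9  # normalize per class

def predict(x):
    """Weighted ensemble: heads vote in proportion to per-class reliability."""
    s = head_scores(x)  # (n_heads, n_classes)
    return int(np.argmax((weights * s).sum(axis=0)))

pred = predict(support[0, 0])
```

Under uniform voting, `weights` would be constant across classes; making it class-conditional lets a head that is reliable only for, say, speech-related classes contribute strongly there and be down-weighted elsewhere.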