Audio-Language Models (ALMs) aim to be general-purpose audio models by providing zero-shot capabilities at test time. Zero-shot performance improves when suitable text prompts are used for each domain, but these prompts are usually hand-crafted through an ad-hoc process and can degrade the ALM's generalization and out-of-distribution performance. Existing approaches to improving domain performance, such as few-shot learning or fine-tuning, require access to annotated data and multiple training iterations. We therefore propose a test-time domain adaptation method for ALMs that requires no annotations. Our method learns a domain vector by enforcing consistency across augmented views of the test audio. We extensively evaluate our approach on 12 downstream tasks spanning multiple domains. With just one example, our domain adaptation method yields a 3.2% (max 8.4%) average improvement in zero-shot performance. After adaptation, the model still retains the generalization property of ALMs.
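The core idea above, learning a domain vector by enforcing agreement among augmented views of the test audio, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all names (`adapt_domain_vector`, `consistency_loss`) are hypothetical, the consistency objective is assumed to be a KL divergence between each view's class distribution and their consensus, and a finite-difference gradient step stands in for backpropagation to keep the sketch dependency-free.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(audio_views, text_embs, domain_vec):
    # Shift the class prompt embeddings by the learned domain vector,
    # then score each augmented audio view against every class.
    logits = audio_views @ (text_embs + domain_vec).T      # (views, classes)
    probs = softmax(logits)
    mean_p = probs.mean(axis=0, keepdims=True)             # consensus prediction
    # Mean KL divergence from each view's prediction to the consensus:
    # zero iff all augmented views agree.
    return float(np.mean(np.sum(probs * (np.log(probs) - np.log(mean_p)), axis=1)))

def adapt_domain_vector(audio_views, text_embs, steps=20, lr=0.1, eps=1e-4):
    # Annotation-free test-time adaptation: only the domain vector is
    # updated; the (frozen) encoders produced the embeddings upstream.
    d = np.zeros(text_embs.shape[1])
    for _ in range(steps):
        base = consistency_loss(audio_views, text_embs, d)
        grad = np.zeros_like(d)
        for i in range(d.size):            # finite-difference gradient
            d[i] += eps
            grad[i] = (consistency_loss(audio_views, text_embs, d) - base) / eps
            d[i] -= eps
        d -= lr * grad
    return d
```

In this sketch the text prompts are never rewritten by hand; the single learned vector shifts all class embeddings jointly, which is why the adapted model can still generalize outside the target domain.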