While automated audio captioning (AAC) has made notable progress, traditional fully supervised AAC models still face two critical challenges: the need for expensive audio-text pair data for training, and performance degradation when transferring across domains. To overcome these limitations, we present DRCap, a data-efficient and flexible zero-shot audio captioning system that requires only text data for training and can quickly adapt to new domains without additional fine-tuning. DRCap integrates a contrastive language-audio pre-training (CLAP) model and a large language model (LLM) as its backbone. During training, the model predicts the ground-truth caption conditioned on a frozen text encoder from CLAP; during inference, the text encoder is replaced with the CLAP audio encoder, so captions for audio clips are generated in a zero-shot manner. To mitigate the modality gap of the CLAP model, we employ a projection strategy on the encoder side and a retrieval-augmented generation strategy on the decoder side. Specifically, audio embeddings are first projected onto a text embedding support to absorb extensive semantic information within the joint multi-modal space of CLAP. At the same time, similar captions retrieved from a datastore are fed as prompts to instruct the LLM, incorporating external knowledge to take full advantage of its strong generative capability. Conditioned on both the projected CLAP embedding and the retrieved similar captions, the model produces more accurate and semantically rich textual descriptions. By tailoring the text embedding support and the caption datastore to the target domain, DRCap acquires a robust ability to adapt to new domains in a training-free manner. Experimental results demonstrate that DRCap outperforms all other zero-shot models in in-domain scenarios and achieves state-of-the-art performance in cross-domain scenarios.
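The two modality-gap mitigations described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual code: the softmax-weighted projection with temperature `tau`, the function names, and the plain cosine-similarity retrieval are all assumptions for exposition; the paper's exact formulation may differ.

```python
import numpy as np

def project_onto_text_support(audio_emb, text_support, tau=0.1):
    """Encoder-side projection: map a CLAP audio embedding onto the span
    of a support set of CLAP text embeddings.

    Sketched here as a softmax-weighted sum of L2-normalized text
    embeddings, weighted by cosine similarity to the audio embedding.
    `tau` is an assumed temperature, not a value from the paper.
    """
    a = audio_emb / np.linalg.norm(audio_emb)
    T = text_support / np.linalg.norm(text_support, axis=1, keepdims=True)
    sims = T @ a                      # cosine similarity to each support text
    w = np.exp(sims / tau)
    w /= w.sum()                      # softmax weights over the support
    proj = w @ T                      # weighted combination of text embeddings
    return proj / np.linalg.norm(proj)

def retrieve_similar_captions(audio_emb, caption_embs, captions, k=3):
    """Decoder-side retrieval: return the k captions from the datastore
    whose CLAP text embeddings are most similar to the audio embedding.
    These would then be inserted into the LLM prompt.
    """
    a = audio_emb / np.linalg.norm(audio_emb)
    C = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    top = np.argsort(C @ a)[::-1][:k]
    return [captions[i] for i in top]
```

Because both the support set and the caption datastore are just arrays of text embeddings, swapping in embeddings built from target-domain captions is enough to adapt the system to a new domain without any retraining, which is the training-free adaptation the abstract describes.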