Recent work on cross-modal understanding and generation, notably models such as CLAP (Contrastive Language-Audio Pretraining) and CAVP (Contrastive Audio-Visual Pretraining), has significantly improved the alignment of text, video, and audio embeddings through a single contrastive loss. However, these methods often overlook the bidirectional interactions between modalities and the inherent noise within each, both of which critically affect the quality and efficacy of cross-modal integration. To address this limitation, we introduce DiffGAP, a novel approach that incorporates a lightweight generative module into the contrastive space. Specifically, DiffGAP employs a bidirectional diffusion process tailored to bridge the cross-modal gap more effectively: text and video embeddings are denoised conditioned on audio embeddings, and vice versa, facilitating more nuanced and robust cross-modal interaction. Experimental results on the VGGSound and AudioCaps datasets demonstrate that DiffGAP significantly improves performance on video/text-to-audio generation and retrieval tasks, confirming its effectiveness in enhancing cross-modal understanding and generation.
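To make the bidirectional conditioning concrete, the sketch below illustrates one way such a denoising objective could look in the contrastive embedding space. This is a minimal illustration, not the paper's implementation: the DDPM-style noise schedule, the MLP denoiser, the embedding dimension, and all names (`CondDenoiser`, `diffusion_loss`, `a2v`, `v2a`) are assumptions for exposition. Each denoiser predicts the noise added to one modality's embedding while conditioning on the other modality's embedding, and the two directions are trained jointly.

```python
# Minimal sketch of bidirectional conditional denoising on frozen contrastive
# embeddings. All shapes, schedules, and module names are illustrative
# assumptions, not the actual DiffGAP implementation.
import torch
import torch.nn as nn

DIM, T = 512, 1000                      # assumed embedding dim and diffusion steps
betas = torch.linspace(1e-4, 0.02, T)   # linear DDPM-style noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class CondDenoiser(nn.Module):
    """Predicts the noise added to one modality's embedding,
    conditioned on the other modality's (clean) embedding."""
    def __init__(self, dim=DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim + 1, 4 * dim), nn.SiLU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, x_t, cond, t):
        t_feat = (t.float() / T).unsqueeze(-1)       # scalar timestep feature
        return self.net(torch.cat([x_t, cond, t_feat], dim=-1))

def diffusion_loss(denoiser, x0, cond):
    """One training step: corrupt x0 via the forward process q(x_t | x0),
    then regress the injected noise given the conditioning embedding."""
    t = torch.randint(0, T, (x0.size(0),))
    ab = alphas_bar[t].unsqueeze(-1)
    eps = torch.randn_like(x0)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    return ((denoiser(x_t, cond, t) - eps) ** 2).mean()

# Bidirectional: denoise audio conditioned on video, and video conditioned
# on audio; the text direction would follow the same pattern.
a2v, v2a = CondDenoiser(), CondDenoiser()
audio_emb = torch.randn(8, DIM)         # stand-ins for frozen encoder outputs
video_emb = torch.randn(8, DIM)
loss = diffusion_loss(v2a, audio_emb, video_emb) \
     + diffusion_loss(a2v, video_emb, audio_emb)
loss.backward()
```

Because both denoisers operate on fixed-size embeddings rather than raw audio or video, such a module stays lightweight relative to the pretrained encoders it sits on top of, which is consistent with the design goal stated in the abstract.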