We present DenseAV, a novel dual-encoder grounding architecture that learns high-resolution, semantically meaningful, and audio-visually aligned features solely by watching videos. We show that DenseAV can discover the ``meaning'' of words and the ``location'' of sounds without explicit localization supervision. Furthermore, it automatically discovers and distinguishes between these two types of associations without supervision. We show that DenseAV's localization abilities arise from a new multi-head feature aggregation operator that directly compares dense image and audio representations for contrastive learning. In contrast, many other systems that learn ``global'' audio and video representations cannot localize words and sounds. Finally, we contribute two new datasets to improve the evaluation of AV representations through speech- and sound-prompted semantic segmentation. On these and other datasets, we show that DenseAV dramatically outperforms the prior art on speech- and sound-prompted semantic segmentation. DenseAV also outperforms the previous state of the art, ImageBind, on cross-modal retrieval while using fewer than half as many parameters. Project Page: \href{https://aka.ms/denseav}{https://aka.ms/denseav}
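To make the aggregation operator concrete, the sketch below illustrates one plausible form of multi-head dense similarity aggregation in PyTorch. It is a minimal illustration under stated assumptions, not DenseAV's released implementation: the function name, the channel-splitting scheme, and the max-over-space, mean-over-time pooling order are assumptions made for exposition.

\begin{verbatim}
import torch
import torch.nn.functional as F

def multihead_dense_similarity(audio_feats, image_feats, num_heads):
    """Hypothetical multi-head dense similarity aggregation.

    audio_feats: (B, T, C)    dense audio features over time
    image_feats: (B, H, W, C) dense image features over space
    Channels are split into `num_heads` groups; each head
    contributes its own dense similarity volume. Returns a
    (B, B) matrix of clip-level similarities suitable for a
    standard symmetric InfoNCE contrastive loss.
    """
    B, T, C = audio_feats.shape
    _, H, W, _ = image_feats.shape
    d = C // num_heads

    a = audio_feats.view(B, T, num_heads, d)      # (B, T, K, d)
    v = image_feats.view(B, H * W, num_heads, d)  # (B, HW, K, d)

    # Dense similarity volume between every audio clip and image:
    # sims[i, j, k, t, p] = <a_i[t, k], v_j[p, k]>
    sims = torch.einsum('itkd,jpkd->ijktp', a, v)  # (B, B, K, T, HW)

    # Aggregate: max over image locations ("where"), mean over
    # time ("when"), then sum the per-head contributions.
    sims = sims.max(dim=-1).values                 # (B, B, K, T)
    sims = sims.mean(dim=-1)                       # (B, B, K)
    return sims.sum(dim=-1)                        # (B, B)

# Usage: feed the aggregated similarities to a symmetric
# InfoNCE loss, pairing each audio clip with its own frame.
audio = torch.randn(4, 50, 256)      # 4 clips, 50 audio frames
image = torch.randn(4, 14, 14, 256)  # 4 frames, 14x14 feature map
logits = multihead_dense_similarity(audio, image, num_heads=2)
labels = torch.arange(4)
loss = 0.5 * (F.cross_entropy(logits, labels) +
              F.cross_entropy(logits.t(), labels))
\end{verbatim}

Because the loss is computed from a dense token-by-token similarity volume rather than from pooled global embeddings, the intermediate volume itself serves as a localization map at inference time; this is the property the abstract contrasts with ``global'' representation learners.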