Integrating audio and visual data for training multimodal foundation models remains challenging. We present Audio-Video Vector Alignment (AVVA), which aligns audiovisual (AV) scene content beyond mere temporal synchronization via a Large Language Model (LLM)-based data curation pipeline. Specifically, AVVA scores and selects high-quality training clips using Whisper, a speech-based audio foundation model, for audio and DINOv2 for video within a dual-encoder contrastive learning framework. Evaluations on AudioCaps, VALOR, and VGGSound demonstrate that this approach achieves significant accuracy gains with substantially less curated data. For instance, AVVA yields a 7.6% improvement in top-1 accuracy for audio-to-video retrieval on VGGSound compared to ImageBind, despite training on only 192 hours of carefully filtered data (vs. 5800+ hours). Moreover, an ablation study shows that trading data quantity for data quality improves performance, yielding top-3 accuracy increases of 47.8, 48.4, and 58.0 percentage points on AudioCaps, VALOR, and VGGSound, respectively, over uncurated baselines. While these results underscore AVVA's data efficiency, we also discuss the overhead of LLM-driven curation and how it may be scaled or approximated in larger domains. Overall, AVVA provides a viable path toward more robust, text-free audiovisual learning with improved retrieval accuracy.
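To make the dual-encoder contrastive setup concrete, the sketch below shows a symmetric InfoNCE-style alignment loss between a batch of pooled audio embeddings (e.g. from Whisper) and video embeddings (e.g. from DINOv2), assumed to have already been projected to a shared dimension. This is a minimal illustration of the general technique, not the paper's exact loss or projection heads, which are not specified in this abstract; the temperature value and pooling are likewise assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize rows to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def symmetric_infonce(audio_emb, video_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired (audio, video) clip embeddings.

    audio_emb, video_emb: (batch, dim) arrays in a shared embedding space.
    Row i of each array is assumed to come from the same curated clip, so
    the diagonal of the similarity matrix holds the positive pairs.
    """
    a = l2_normalize(audio_emb)
    v = l2_normalize(video_emb)
    logits = a @ v.T / temperature  # (batch, batch) cosine similarities

    n = len(a)
    idx = np.arange(n)

    def cross_entropy(lg):
        # Log-softmax over each row, then pick out the diagonal (positive) entry.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the audio-to-video and video-to-audio retrieval directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In training, minimizing this loss pulls each clip's audio and video embeddings together while pushing apart mismatched pairs in the batch, which is what makes the curated pairing quality (rather than raw quantity) matter for retrieval accuracy.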