Weakly-supervised audio-visual video parsing (AVVP) seeks to detect audible, visible, and audio-visual events without temporal annotations. Previous work has emphasized refining global predictions through contrastive or collaborative learning, but has neglected stable segment-level supervision and class-aware cross-modal alignment. To address this, we propose two strategies: (1) an exponential moving average (EMA)-guided pseudo-supervision framework that generates reliable segment-level masks via adaptive thresholds or top-k selection, offering stable temporal guidance beyond video-level labels; and (2) a class-aware cross-modal agreement (CMA) loss that aligns audio and visual embeddings at reliable segment-class pairs, enforcing cross-modal consistency while preserving temporal structure. Evaluations on the LLP and UnAV-100 datasets show that our method achieves state-of-the-art (SOTA) performance across multiple metrics.
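To make the first strategy concrete, the following PyTorch sketch shows one plausible way an EMA teacher could produce segment-level pseudo masks via adaptive thresholds or top-k selection, and how a student could be supervised on them. All names, tensor shapes, and the specific thresholding rule (per-class mean probability floored at `tau`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Exponential moving average of student weights into the teacher.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

@torch.no_grad()
def make_pseudo_masks(teacher_probs, video_labels, tau=0.5, top_k=None):
    """teacher_probs: (B, T, C) segment-level probabilities from the EMA teacher.
    video_labels:  (B, C) weak video-level labels.
    Returns a binary mask (B, T, C) marking reliable positive segments."""
    if top_k is not None:
        # Top-k selection: keep the k most confident segments per class.
        kth = teacher_probs.topk(top_k, dim=1).values[:, -1:, :]  # (B, 1, C)
        pos = (teacher_probs >= kth).float()
    else:
        # Adaptive threshold: per-class mean probability, floored at tau
        # (one possible choice of adaptive rule, assumed here).
        thr = torch.clamp(teacher_probs.mean(dim=1, keepdim=True), min=tau)
        pos = (teacher_probs >= thr).float()
    # Only classes present at the video level may yield positive segments.
    return pos * video_labels.unsqueeze(1)

def pseudo_supervision_loss(student_logits, pseudo_masks, video_labels):
    # Segment-level BCE against the EMA pseudo masks, restricted to
    # classes known to be present from the video-level labels.
    valid = video_labels.unsqueeze(1).expand_as(student_logits)
    loss = F.binary_cross_entropy_with_logits(
        student_logits, pseudo_masks, reduction="none")
    return (loss * valid).sum() / valid.sum().clamp(min=1.0)
```

In this sketch the teacher is updated by `ema_update` after each student step, so its segment predictions drift slowly and the resulting masks give the stable temporal guidance the abstract refers to.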
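For the second strategy, a minimal sketch of how a class-aware cross-modal agreement loss might look: audio and visual segment embeddings are pulled together only where the two modalities' pseudo masks agree on a positive segment-class pair, so alignment respects both class identity and temporal position. The agreement rule and the cosine formulation below are assumptions for illustration; a fuller version could, e.g., use per-class projections.

```python
import torch
import torch.nn.functional as F

def cma_loss(audio_emb, visual_emb, audio_masks, visual_masks):
    """audio_emb / visual_emb: (B, T, D) segment embeddings.
    audio_masks / visual_masks: (B, T, C) binary pseudo masks per modality."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    visual_emb = F.normalize(visual_emb, dim=-1)
    # A segment is reliable if both modalities mark the same class positive.
    agree = (audio_masks * visual_masks).amax(dim=-1)      # (B, T)
    # Cosine similarity between the two modalities at each segment.
    sim = (audio_emb * visual_emb).sum(dim=-1)             # (B, T)
    # Pull embeddings together on agreed segments; others contribute nothing,
    # which leaves the temporal structure of unreliable segments untouched.
    loss = (1.0 - sim) * agree
    return loss.sum() / agree.sum().clamp(min=1.0)
```

Restricting the loss to agreed segment-class pairs is what makes the alignment class-aware in this sketch: segments where only one modality fires are excluded rather than forced to match.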