Dense audio-visual event localization (DAVE) aims to identify event categories and locate their temporal boundaries in untrimmed videos. Most existing studies impose event-related semantic constraints only on the final outputs and lack cross-modal semantic bridging in intermediate layers. This leaves a semantic gap between modalities that hinders subsequent fusion, making it difficult to distinguish event-related content from irrelevant background. Moreover, prior work rarely models the correlations between events, which limits the model's ability to infer concurrent events in complex scenarios. In this paper, we introduce multi-stage semantic guidance and multi-event relationship modeling, which respectively enable hierarchical semantic understanding of audio-visual events and adaptive extraction of event dependencies, thereby focusing more effectively on event-related information. Specifically, our event-aware semantic guided network (ESG-Net) comprises an early semantics interaction (ESI) module and a mixture of dependency experts (MoDE) module. ESI applies multi-stage semantic guidance, explicitly constraining the model to learn semantic information through multi-modal early fusion and several classification losses, ensuring hierarchical understanding of event-related content. MoDE extracts multi-event dependencies through multiple serial mixture-of-experts layers with adaptive weight allocation. Extensive experiments demonstrate that our method significantly surpasses state-of-the-art methods while greatly reducing parameters and computation. Our code will be released at https://github.com/uchiha99999/ESG-Net.
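To make the ESI idea concrete, the following is a minimal sketch of one early-fusion stage with an intermediate classification head, written in PyTorch. The module names, the cross-attention design, and the choice of a multi-label BCE loss are illustrative assumptions, not the paper's exact implementation; the point is only that each intermediate fusion stage emits logits that receive their own classification loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyFusionStage(nn.Module):
    """One early audio-visual fusion stage with an intermediate event classifier.

    Hypothetical sketch: the real ESI module may fuse modalities differently.
    """
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        # Cross-modal attention: visual snippets attend to audio snippets (an assumption).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Intermediate head producing per-snippet event logits for semantic guidance.
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        # audio, visual: (batch, time, dim) snippet-level features.
        fused, _ = self.cross_attn(query=visual, key=audio, value=audio)
        fused = fused + visual  # residual connection keeps stages stackable
        logits = self.classifier(fused)  # (batch, time, num_classes)
        return fused, logits

def multi_stage_guidance_loss(stage_logits, targets):
    # Sum a multi-label BCE loss over every intermediate stage, explicitly
    # constraining each fusion layer toward event-related semantics.
    return sum(F.binary_cross_entropy_with_logits(l, targets) for l in stage_logits)
```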
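Similarly, below is a minimal sketch of one mixture-of-dependency-experts layer with adaptive weight allocation, here realized as softmax gating over small feed-forward experts; the expert architecture, gating scheme, and layer count are assumptions for illustration rather than the paper's actual MoDE design.

```python
import torch
import torch.nn as nn

class DependencyExpertLayer(nn.Module):
    """One gated mixture-of-experts layer over per-snippet fused features.

    Hypothetical sketch of adaptive weight allocation across experts.
    """
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        # Each expert is a small feed-forward network (an assumption).
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        ])
        # Gating network produces adaptive per-expert weights for each snippet.
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) fused audio-visual features.
        weights = torch.softmax(self.gate(x), dim=-1)                 # (B, T, E)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)   # (B, T, D, E)
        mixed = (outputs * weights.unsqueeze(2)).sum(dim=-1)          # (B, T, D)
        return x + mixed  # residual connection allows serial stacking

# Stacking several layers in series mirrors the "multiple serial mixture
# of experts" described in the abstract (depth of 3 is an assumption).
mode = nn.Sequential(*[DependencyExpertLayer(dim=256) for _ in range(3)])
```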