Recent breakthroughs in Multimodal Large Language Models (MLLMs) have drawn significant attention in the deep learning community. The fusion of Video Foundation Models (VFMs) and Large Language Models (LLMs) has proven instrumental in building robust video understanding systems that overcome the constraints of predefined visual tasks. These MLLMs exhibit remarkable proficiency in video comprehension, rapidly achieving unprecedented performance across diverse benchmarks. However, they demand substantial memory and computational resources, underscoring the continued importance of traditional models for video comprehension tasks. In this paper, we introduce a novel learning paradigm, termed MLLM4WTAL, which harnesses MLLMs to provide temporal action key semantics and complete semantic priors to conventional Weakly-supervised Temporal Action Localization (WTAL) methods. MLLM4WTAL enhances WTAL through MLLM guidance via two complementary modules: Key Semantic Matching (KSM) and Complete Semantic Reconstruction (CSR). Working in tandem, these modules address the incomplete and over-complete localization results common in WTAL methods. Extensive experiments validate the efficacy of our approach in improving the performance of various heterogeneous WTAL models.