Video-text retrieval (VTR) aims to locate relevant videos from natural language queries. Current methods, often built on pre-trained models such as CLIP, are hindered by the inherent redundancy of video and by their reliance on coarse, final-layer features, which limits matching accuracy. To address this, we introduce HVP-Net (Hierarchical Visual Perception Network), a framework that mines richer video semantics by extracting and refining features from multiple intermediate layers of a vision encoder. Our approach progressively distills salient visual concepts from raw patch tokens at different semantic levels, mitigating redundancy while preserving the details crucial for alignment. The resulting video representation is more robust and achieves new state-of-the-art performance on challenging benchmarks including MSRVTT, DiDeMo, and ActivityNet, validating the effectiveness of exploiting hierarchical features for video-text retrieval. Our code is available at https://github.com/boyun-zhang/HVP-Net.
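The abstract only sketches the mechanism, so as a toy illustration of the general idea (not the paper's actual method): given patch-token features from several intermediate encoder layers, one plausible way to "distill salient concepts while mitigating redundancy" is to keep, per layer, only the patch tokens most aligned with the text query, pool them, and fuse across layers. The function name, the top-k selection rule, and all shapes below are assumptions for the sketch.

```python
import numpy as np

def hierarchical_pool(layer_tokens, text_emb, k=4):
    """Toy distillation sketch (not HVP-Net itself): for each intermediate
    layer, keep the k patch tokens most similar to the text embedding
    (reducing redundancy), mean-pool them into a per-layer summary, then
    average the summaries across semantic levels."""
    per_layer = []
    for tokens in layer_tokens:                  # tokens: (num_patches, dim)
        scores = tokens @ text_emb               # similarity of each patch to the query
        keep = tokens[np.argsort(scores)[-k:]]   # k most salient patches
        per_layer.append(keep.mean(axis=0))      # per-layer summary vector
    return np.stack(per_layer).mean(axis=0)      # fuse across layers

# Random features standing in for ViT intermediate-layer outputs
rng = np.random.default_rng(0)
layers = [rng.normal(size=(49, 8)) for _ in range(3)]  # 3 layers, 49 patches, dim 8
query = rng.normal(size=8)                             # text embedding
video_emb = hierarchical_pool(layers, query)
print(video_emb.shape)  # (8,)
```

In a real CLIP-style pipeline the per-frame result would additionally be aggregated over sampled frames before computing the video-text similarity; see the repository linked above for the actual implementation.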