Current video retrieval systems, especially those used in competitions, primarily query individual keyframes or images rather than encoding an entire clip or video segment. However, queries often describe an action or event that unfolds over a series of frames, not a single image. A single frame therefore carries insufficient information, which degrades retrieval accuracy. Moreover, embeddings extracted solely from keyframes do not give models enough information to encode the higher-level, more abstract insights that can be inferred from the video: such models tend to describe only the objects present in a frame, without deeper understanding. In this work, we propose a system that integrates recent methodologies into a novel pipeline that extracts multimodal data and incorporates information from multiple frames within a video, enabling the model to abstract higher-level information that captures latent meaning, focusing on what can be inferred from the video clip rather than on object detection in a single image.
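As a minimal sketch of the clip-level idea described above (not the paper's actual pipeline, whose details are not specified in this passage), per-frame embeddings can be aggregated into a single clip vector, for example by mean pooling followed by L2 normalization, and then scored against a query embedding by cosine similarity. The function names and the embedding dimension below are illustrative assumptions.

```python
# Hedged sketch: aggregate per-frame embeddings into one clip-level vector
# instead of matching a single keyframe. Mean pooling is one simple choice;
# the paper's actual aggregation method is not specified here.
import numpy as np

def clip_embedding(frame_embeddings: np.ndarray) -> np.ndarray:
    """Aggregate a (num_frames, dim) array of frame embeddings into one
    L2-normalized clip embedding."""
    pooled = frame_embeddings.mean(axis=0)
    return pooled / np.linalg.norm(pooled)

def score(query_embedding: np.ndarray, clip_vec: np.ndarray) -> float:
    """Cosine similarity between a (normalized) query embedding and a clip."""
    q = query_embedding / np.linalg.norm(query_embedding)
    return float(q @ clip_vec)

# Toy example: 8 frames with 4-dim embeddings (real systems would use
# e.g. 512-dim features from a vision-language encoder such as CLIP).
rng = np.random.default_rng(0)
frames = rng.normal(size=(8, 4))
clip_vec = clip_embedding(frames)
```

Because the clip vector summarizes all frames, a query describing an action spanning several frames can match the aggregated representation even when no single keyframe depicts the full event.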