Multimodal foundation models (MFMs) have demonstrated significant success in tasks such as visual captioning, question answering, and image-text retrieval. However, these models face inherent limitations due to their finite internal capacity, which restricts their ability to process extended temporal sequences, a crucial requirement for comprehensive video and audio analysis. To overcome these challenges, we introduce a specialized cognitive module, temporal working memory (TWM), designed to enhance the temporal modeling capabilities of MFMs. TWM selectively retains task-relevant information across temporal dimensions, ensuring that critical details are preserved throughout the processing of video and audio content. It employs a query-guided attention mechanism to focus on the most informative multimodal segments within temporal sequences. By retaining only the most relevant content, TWM makes efficient use of the model's limited capacity and strengthens its temporal modeling ability. This plug-and-play module can be easily integrated into existing MFMs. With TWM, nine state-of-the-art models exhibit significant performance improvements across tasks such as video captioning, question answering, and video-text retrieval. By enhancing temporal modeling, TWM extends the capability of MFMs to handle complex, time-sensitive data effectively. Our code is available at https://github.com/xid32/NAACL_2025_TWM.
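To illustrate the idea of query-guided selection of informative segments, the following is a minimal, self-contained sketch. It is not the paper's implementation: it substitutes simple cosine similarity for the learned query-guided attention, and the function and variable names (`select_segments`, `query`, `segments`) are illustrative assumptions. The actual TWM scores multimodal segment features against a task query and retains only the top-scoring ones, preserving their temporal order.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_segments(query, segments, k):
    """Toy stand-in for query-guided retention: score each temporal
    segment embedding against the query, keep the top-k, and return
    their indices in original temporal order."""
    scores = [(i, cosine(query, seg)) for i, seg in enumerate(segments)]
    top_k = sorted(scores, key=lambda item: item[1], reverse=True)[:k]
    return sorted(i for i, _ in top_k)  # restore temporal order

# Example: a 2-D query against four segment embeddings, keeping 2.
kept = select_segments(
    query=[1.0, 0.0],
    segments=[[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]],
    k=2,
)
print(kept)  # indices of the two segments most similar to the query
```

Returning indices in temporal order (rather than score order) mirrors the requirement that the retained segments still form a coherent temporal sequence for the downstream model.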