Visual Commonsense Reasoning (VCR) is a cognitive task that challenges models to answer visual questions requiring human commonsense and to provide rationales explaining why those answers are correct. With the emergence of Large Language Models (LLMs), it is natural and imperative to explore their applicability to VCR. However, the VCR task demands substantial external knowledge to tackle its challenging questions, necessitating special designs to activate LLMs' commonsense reasoning abilities. Moreover, most existing Multimodal LLMs adopt an abstraction of the entire input image, which makes it difficult to comprehend VCR's unique co-reference tags between image regions and text, posing challenges for fine-grained alignment. To address these issues, we propose EventLens, which leverages Event-Aware Pretraining and Cross-modal Linking and EnhanceS VCR. First, emulating the cognitive process of human reasoning, an Event-Aware Pretraining auxiliary task is introduced to better activate the LLM's global comprehension of intricate scenarios. Second, during fine-tuning, we further utilize reference tags to bridge RoI features with text while preserving the semantics of both modalities. Finally, we use instruct-style prompts to narrow the gap between pretraining and fine-tuning, and task-specific adapters to better integrate the LLM's inherent knowledge with new commonsense. Experimental results demonstrate the effectiveness of our proposed auxiliary task and fine-grained linking strategy.