Large language models (LLMs) have shown remarkable ability across a wide range of language tasks, particularly through their emergent in-context learning capability. By extending LLMs to incorporate visual inputs, large vision-language models (LVLMs) have achieved impressive performance on tasks such as recognition and visual question answering (VQA). Despite growing interest in the utility of LLMs for causal reasoning tasks such as causal discovery and counterfactual reasoning, relatively little work has examined the abilities of LVLMs on visual causal reasoning tasks. We take this opportunity to formally introduce a comprehensive benchmark for evaluating the causal reasoning of LVLMs via multi-modal in-context learning. Our benchmark, CausalVLBench, encompasses three representative tasks: causal structure inference, intervention target prediction, and counterfactual prediction. We evaluate state-of-the-art open-source LVLMs on these tasks across three causal representation learning datasets and demonstrate their fundamental strengths and weaknesses. We hope that our benchmark elucidates the shortcomings of existing vision-language models and motivates new directions and paradigms for improving the visual causal reasoning abilities of LVLMs.