Multipanel images, commonly seen in web screenshots, posters, and similar media, pervade our daily lives. These images, characterized by their composition of multiple subfigures in distinct layouts, convey information effectively. Toward building advanced multimodal AI applications, such as agents that understand complex scenes and navigate webpages, the skill of multipanel visual reasoning is essential, and a comprehensive evaluation of models in this regard is important. We therefore introduce Multipanel Visual Question Answering (MultipanelVQA), a novel benchmark comprising 6,600 triplets of questions, answers, and multipanel images that specifically challenge models' comprehension of multipanel images. Our evaluation shows that questions in the MultipanelVQA benchmark pose significant challenges to the state-of-the-art Large Vision Language Models (LVLMs) tested, even though humans attain approximately 99\% accuracy on them. Distinctively, the MultipanelVQA benchmark features synthetically generated multipanel images specifically crafted to isolate and assess the impact of individual factors, such as layout, on LVLMs' multipanel image comprehension. As a result, in addition to benchmarking LVLMs' ability to understand multipanel images, we analyze potential causes of their performance and offer insights for improvement based on the synthetic data. Code and data are released at https://sites.google.com/view/multipanelvqa/home.