Visual arguments, often used in advertising or social causes, rely on images to persuade viewers to do or believe something. Understanding these arguments requires selective vision: only specific visual stimuli within an image are relevant to the argument, and relevance can be understood only within the context of a broader argumentative structure. While visual arguments are readily appreciated by human audiences, we ask: are today's AI systems capable of similar understanding? We collect and release VisArgs, an annotated corpus designed to make explicit the (usually implicit) structures underlying visual arguments. VisArgs includes 1,611 images accompanied by three types of textual annotations: 5,112 visual premises (with region annotations), 5,574 commonsense premises, and reasoning trees connecting them into broader arguments. We propose three tasks over VisArgs to probe machine capacity for visual argument understanding: localization of premises, identification of premises, and deduction of conclusions. Our experiments show that (1) machines cannot fully identify the relevant visual cues: the best-performing model, GPT-4-O, achieved an accuracy of only 78.5%, whereas humans reached 98.0%, and all models dropped in accuracy by an average of 19.5% when the comparison set was changed from objects outside the image to irrelevant objects within the image; and (2) this limitation is the greatest factor in their failure to understand visual arguments: when deducing the conclusion of a visual argument, most models improved the most when given the relevant visual premises as additional input, compared to other types of input.