Misleading visualizations are a potent driver of misinformation on social media and the web. By violating chart design principles, they distort data and lead readers to draw inaccurate conclusions. Prior work has shown that both humans and multimodal large language models (MLLMs) are frequently deceived by such visualizations. Automatically detecting misleading visualizations and identifying the specific design rules they violate could help protect readers and reduce the spread of misinformation. However, the training and evaluation of AI models on this task have been limited by the absence of large, diverse, and openly available datasets. In this work, we introduce Misviz, a benchmark of 2,604 real-world visualizations annotated with 12 types of misleaders. To support model training, we also create Misviz-synth, a synthetic dataset of 57,665 visualizations generated with Matplotlib from real-world data tables. We conduct a comprehensive evaluation on both datasets using state-of-the-art MLLMs, rule-based systems, and image-axis classifiers. Our results reveal that the task remains highly challenging. We release Misviz, Misviz-synth, and the accompanying code.