Charts are a universally adopted medium for data communication, yet existing chart understanding benchmarks are overwhelmingly English-centric, limiting their accessibility and relevance to global audiences. To address this limitation, we introduce PolyChartQA, the first large-scale multilingual benchmark for chart question answering, comprising 22,606 charts and 26,151 QA pairs across 10 diverse languages. PolyChartQA is constructed through a scalable pipeline that enables efficient multilingual chart generation via data translation and code reuse, supported by LLM-based translation and rigorous quality control. Using PolyChartQA, we systematically evaluate state-of-the-art LVLMs on multilingual chart understanding and reveal a significant performance gap between English and other languages, particularly low-resource ones. Additionally, we introduce a companion multilingual chart question answering training set, PolyChartQA-Train, on which fine-tuning LVLMs yields substantial gains in multilingual chart understanding across diverse model sizes and architectures. Together, our benchmark and training set provide a foundation for developing globally inclusive vision-language models capable of understanding charts across diverse linguistic contexts.