Understanding data visualizations like charts and plots requires reasoning about both visual elements and numerics. Although strong on extractive questions, current chart visual question answering (chart VQA) models struggle with complex reasoning questions. In this work, we address this lack of reasoning ability through data augmentation. We leverage Large Language Models (LLMs), which have been shown to possess strong reasoning ability, as automatic data annotators that generate question-answer annotations for chart images. The key innovation in our method lies in the Synthesize Step-by-Step strategy: our LLM-based data generator learns to decompose a complex question into step-by-step sub-questions (rationales), which are then used to derive the final answer using external tools, i.e., Python. This step-wise generation procedure is trained on synthetic data generated by a template-based QA generation pipeline. Experimental results highlight the significance of the proposed step-by-step generation. By training with the LLM-augmented data (LAMENDA), we significantly enhance chart VQA models, achieving state-of-the-art accuracy on the ChartQA and PlotQA datasets. In particular, our approach improves the accuracy of the previous state-of-the-art approach from 38% to 54% on the human-written questions in the ChartQA dataset, which require strong reasoning. We hope our work underscores the potential of synthetic data and encourages further exploration of data augmentation using LLMs for reasoning-heavy tasks.
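The step-wise generation idea described above can be sketched in a few lines of Python. This is a hypothetical, minimal illustration of decomposing a complex chart question into extractive sub-questions and computing the final answer with Python as the external tool; the toy chart values, sub-questions, and variable names are illustrative assumptions, not the paper's actual pipeline.

```python
# Toy bar-chart values standing in for what a model would extract from an image.
chart_data = {"2019": 38.0, "2020": 42.0, "2021": 54.0}

question = "By how much did the value increase from 2019 to 2021?"

# Step 1: the LLM-based generator decomposes the question into
# extractive sub-questions (the rationales).
sub_questions = [
    "What is the value for 2019?",
    "What is the value for 2021?",
]

# Step 2: each sub-question is answered by simple extraction (here, a lookup).
v_2019 = chart_data["2019"]
v_2021 = chart_data["2021"]

# Step 3: the final answer is derived by executing Python code over the
# intermediate answers, rather than asking the LLM to do the arithmetic.
answer = v_2021 - v_2019
print(answer)  # 16.0
```

The design point the sketch captures is the division of labor: the LLM supplies the decomposition, while deterministic tool execution supplies the numeric answer.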