We present a quantitative evaluation of the effect of zero-shot large language models (LLMs) and prompting strategies on chart-reading tasks. We asked LLMs to answer 107 visualization questions, comparing inference accuracy between the agentic GPT-5 and the multimodal GPT-4V on difficult image instances where GPT-4V failed to produce correct answers. Our results show that model architecture dominates inference accuracy: GPT-5 largely improved accuracy, while prompt variants yielded only small effects. The pre-registration of this work is available at https://osf.io/u78td/?view_only=6b075584311f48e991c39335c840ded3; the Google Drive materials are at https://drive.google.com/file/d/1ll8WWZDf7cCNcfNWrLViWt8GwDNSvVrp/view.