Large Language Models (LLMs) have been adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as spatial layouts. In this work, we examine the extent to which LLMs exhibit such sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We conducted three experiments and tested four common bar chart layouts: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. In Experiment 1, we identified the optimal configurations for generating meaningful chart takeaways by testing four LLMs, two temperature settings, nine chart specifications, and two prompting strategies. We found that even state-of-the-art LLMs struggled to generate semantically diverse and factually accurate takeaways. In Experiment 2, we used the optimal configurations to generate 30 chart takeaways each for eight visualizations across four layouts and two datasets, in both zero-shot and one-shot settings. Compared to human takeaways, we found that the takeaways LLMs generated often did not match the types of comparisons made by humans. In Experiment 3, we examined the effect of chart context and data on LLM takeaways. We found that LLMs, unlike humans, exhibited variation in takeaway comparison types across different bar charts that used the same bar layout. Overall, our case study evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human chart takeaways.