Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, a method for extracting, modeling, and assessing visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines--DracoGPT-Rank and DracoGPT-Recommend--to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT can accurately model the preferences expressed by LLMs, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantially diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and to provide a robust and cost-effective stand-in for LLMs.