Visualization authoring is an iterative process that requires users to modify parameters such as color schemes and data transformations to achieve the desired aesthetics and effectively convey insights. Due to the complexity of these adjustments, users often create defective visualizations and require troubleshooting support. In this paper, we examine two primary approaches to visualization troubleshooting: (1) human-assisted support via forums, where users receive advice from other individuals, and (2) AI-assisted support using large language models (LLMs). Our goal is to understand the strengths and limitations of each approach in supporting visualization troubleshooting tasks. To this end, we collected 889 Vega-Lite cases from Stack Overflow. We then conducted a comprehensive analysis to understand the types of questions users ask, the effectiveness of human and AI guidance, and the impact of supplementary resources, such as documentation and examples, on troubleshooting outcomes. Our findings reveal a striking contrast between human- and AI-assisted troubleshooting: human-assisted troubleshooting provides tailored, context-sensitive advice but varies in response quality, whereas AI-assisted troubleshooting offers rapid feedback but often requires additional contextual resources to achieve the desired results.