Visualization authoring is an iterative process that requires users to adjust parameters to achieve the desired aesthetics. Due to this complexity, users often create defective visualizations and struggle to fix them. Many seek help on forums (e.g., Stack Overflow), while others turn to AI, yet little is known about the strengths and limitations of these approaches, or how they can be effectively combined. We analyze Vega-Lite debugging cases from Stack Overflow, categorizing question types by asker, evaluating human responses, and assessing AI performance. Guided by these findings, we design a human-AI co-debugging system that combines LLM-generated suggestions with forum knowledge. We evaluated this system in a user study on 36 unresolved problems, comparing it with forum answers and LLM baselines. Our results show that while forum contributors provide accurate but slow solutions and LLMs offer immediate but sometimes misaligned guidance, the hybrid system resolves 86% of cases, a higher rate than either approach alone.