Visualization dashboards are regularly used for data exploration and analysis, but their complex interactions and interlinked views often require time-consuming onboarding sessions with dashboard authors. Preparing these onboarding materials is labor-intensive, and they must be manually updated whenever dashboards change. Recent advances in multimodal interaction powered by large language models (LLMs) provide ways to support self-guided onboarding. We present DIANA (Dashboard Interactive Assistant for Navigation and Analysis), a multimodal dashboard assistant that supports navigation and guided analysis through chat, audio, and mouse-based interactions. Users can choose any interaction modality, or a combination of them, to onboard themselves to the dashboard. Each modality highlights relevant dashboard features to support user orientation. Unlike typical LLM systems that rely solely on text-based chat, DIANA combines multiple modalities to provide explanations directly in the dashboard interface. We conducted a qualitative user study to understand how users employ different modalities for different types and complexities of onboarding tasks.