Visualization dashboards are widely used for data exploration and analysis, but their complex interactions and interlinked views often require time-consuming onboarding sessions led by dashboard authors. Preparing these onboarding materials is labor-intensive and requires manual updates whenever dashboards change. Recent advances in multimodal interaction powered by large language models (LLMs) offer ways to support self-guided onboarding. We present DIANA (Dashboard Interactive Assistant for Navigation and Analysis), a multimodal dashboard assistant that supports navigation and guided analysis through chat, audio, and mouse-based interactions. Users can choose any interaction modality, or a combination of them, to onboard themselves on the dashboard. Each modality highlights relevant dashboard features to support user orientation. Unlike typical LLM systems that rely solely on text-based chat, DIANA combines multiple modalities to provide explanations directly in the dashboard interface. We conducted a qualitative user study to understand how the different modalities are used across onboarding tasks of varying types and complexities.