Advances in AI-enabled techniques have accelerated the creation and automation of visualizations over the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both the structural and the semantic information of visualizations in declarative specifications. To enhance its context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the co-occurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. The results verify that our embedding method is consistent with human cognition and show its advantages over existing methods.
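To make the multi-task setup concrete, the following is a minimal sketch of how a Chart2Vec-style objective could be assembled: an encoder fuses structural and semantic features of a chart into one embedding, an unsupervised triplet loss pulls together charts that co-occur in the same multi-view context, and a supervised head classifies co-occurrence. All names, dimensions, and loss weights here are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch only: a Chart2Vec-style multi-task embedding objective.
# The encoder design, feature split, and loss weighting are assumptions
# for exposition, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChartEncoder(nn.Module):
    """Fuses structural and semantic features of a chart specification
    into a single unit-length embedding vector."""
    def __init__(self, struct_dim=64, sem_dim=128, embed_dim=96):
        super().__init__()
        self.struct_mlp = nn.Sequential(nn.Linear(struct_dim, embed_dim), nn.ReLU())
        self.sem_mlp = nn.Sequential(nn.Linear(sem_dim, embed_dim), nn.ReLU())
        self.fusion = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, struct_feat, sem_feat):
        h = torch.cat([self.struct_mlp(struct_feat), self.sem_mlp(sem_feat)], dim=-1)
        return F.normalize(self.fusion(h), dim=-1)

def multitask_loss(anchor, positive, negative, cooccur_logits, cooccur_labels, alpha=0.5):
    """Combines an unsupervised triplet loss (charts from the same
    multi-view context embed nearby) with a supervised co-occurrence
    classification loss; alpha balances the two tasks."""
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=0.3)
    supervised = F.binary_cross_entropy_with_logits(cooccur_logits, cooccur_labels)
    return alpha * triplet + (1 - alpha) * supervised

# Toy usage with random features standing in for encoded chart specs.
encoder = ChartEncoder()
cooccur_head = nn.Linear(2 * 96, 1)  # predicts whether two charts co-occur

a = encoder(torch.randn(8, 64), torch.randn(8, 128))  # anchor charts
p = encoder(torch.randn(8, 64), torch.randn(8, 128))  # charts from the same context
n = encoder(torch.randn(8, 64), torch.randn(8, 128))  # charts from other contexts
logits = cooccur_head(torch.cat([a, p], dim=-1)).squeeze(-1)
loss = multitask_loss(a, p, n, logits, torch.ones(8))
loss.backward()
```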