Research in ML4VIS investigates how to use machine learning (ML) techniques to generate visualizations, and the field is growing rapidly with high societal impact. However, as with any computational pipeline that employs ML processes, ML4VIS approaches are susceptible to a range of ML-specific adversarial attacks. These attacks can manipulate visualization generation, misleading analysts and impairing their judgments. Due to a lack of synthesis from both the visualization and ML perspectives, this security aspect is largely overlooked in the current ML4VIS literature. To bridge this gap, we investigate the potential vulnerabilities of ML-aided visualizations to adversarial attacks through a holistic lens that combines both visualization and ML perspectives. We first identify the attack surface (i.e., the attack entry points) that is unique to ML-aided visualizations. We then exemplify five different adversarial attacks. These examples highlight the range of possible attacks when the attack surface and multiple adversary capabilities are considered. Our results show that adversaries can mount a variety of attacks, such as creating arbitrary and deceptive visualizations, by systematically identifying the input attributes that are influential in ML inferences. Based on our observations of the attack surface characteristics and the attack examples, we underline the importance of comprehensive studies of security issues and defense mechanisms as an urgent call to the ML4VIS community.
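To illustrate how an adversary might "systematically identify input attributes that are influential in ML inferences," the following is a minimal sketch of a gradient-based (FGSM-style) perturbation against a toy stand-in model. The logistic "chart-type classifier," its weights, and the attack budget are all illustrative assumptions, not the actual models or attacks studied in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    # Hypothetical stand-in for an ML4VIS inference step: the probability
    # that input attributes x should yield a particular visualization.
    return sigmoid(w @ x)

def gradient_wrt_input(x, w):
    # d model / d x for the logistic model: sigma'(w.x) * w.
    # Attributes with large-magnitude gradients are the "influential"
    # ones an adversary would target.
    p = model(x, w)
    return p * (1.0 - p) * w

def fgsm_perturb(x, w, epsilon):
    # Fast Gradient Sign Method: nudge every attribute by at most epsilon
    # in the direction that most increases the model's output.
    return x + epsilon * np.sign(gradient_wrt_input(x, w))

w = np.array([2.0, -1.0, 0.1])   # assumed model weights
x = np.array([0.2, 0.5, 0.3])    # benign input attributes
x_adv = fgsm_perturb(x, w, epsilon=0.1)
# x_adv shifts the model's inference even though each attribute
# changes by no more than epsilon.
```

In a white-box setting the adversary computes these gradients directly; in a black-box setting they can be approximated by querying the model with perturbed inputs, which is one reason the attack surface of ML-aided visualizations is broader than it may first appear.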