Deep learning (DL)-based automated cybersickness detection, combined with adaptive mitigation techniques, can enhance user comfort and interaction. However, recent studies show that these DL-based systems are susceptible to adversarial attacks: small perturbations to sensor inputs can degrade model performance, trigger incorrect mitigation, and disrupt the user's immersive experience (UIX). Moreover, there is a lack of dedicated open-source testbeds for evaluating the robustness of these systems under adversarial conditions, limiting our ability to assess their real-world effectiveness. To address this gap, this paper introduces Adversarial-VR, a novel real-time VR testbed for evaluating DL-based cybersickness detection and mitigation strategies under adversarial conditions. Developed in Unity, the testbed integrates two state-of-the-art (SOTA) DL models, DeepTCN and Transformer, both trained on the open-source MazeSick dataset for real-time cybersickness severity detection, and applies a dynamic visual tunneling mechanism that adjusts the field of view based on model outputs. To assess robustness, we incorporate three SOTA adversarial attacks, MI-FGSM, PGD, and C&W, which prevent cybersickness mitigation by corrupting the predictions of the DL-based cybersickness models. We implement these attacks in a testbed with a custom-built VR maze simulation and an HTC Vive Pro Eye headset, and we open-source our implementation for widespread adoption by VR developers and researchers. Results show that these adversarial attacks successfully fool the system: for instance, the C&W attack reduces the accuracy of the Transformer-based cybersickness model by a factor of 5.94 compared to the attack-free baseline.
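To make the attack setting concrete, the sketch below illustrates MI-FGSM (momentum iterative FGSM), one of the three attacks named above, applied to a toy linear softmax classifier standing in for the DL cybersickness model. This is a minimal, hedged illustration of the general technique, not the paper's actual implementation: the model, gradient computation, and hyperparameter values (`eps`, `alpha`, `steps`, `mu`) are assumptions chosen for clarity. The attack accumulates a momentum term over the normalized input gradient and steps along its sign, projecting each iterate back into an L-infinity ball of radius `eps` around the clean sensor sample.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mifgsm_attack(W, b, x, y, eps=0.1, alpha=0.02, steps=10, mu=1.0):
    """MI-FGSM against a linear softmax classifier z = W x + b.

    Maximizes the cross-entropy loss of true class y by ascending the
    sign of a momentum-accumulated input gradient, while keeping the
    perturbation within ||x_adv - x||_inf <= eps.
    """
    x_adv = x.copy()
    g = np.zeros_like(x)
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        p[y] -= 1.0                                 # dL/dz for cross-entropy
        grad = W.T @ p                              # dL/dx by the chain rule
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        x_adv = x_adv + alpha * np.sign(g)          # signed gradient step
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project into eps-ball
    return x_adv
```

In the testbed's threat model, `x` would be a window of sensor readings (e.g., eye and head tracking) and the perturbed `x_adv` is what reaches the detection model, so the mitigation logic downstream acts on a corrupted severity prediction.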