Traditional authentication methods, such as passwords and biometrics, verify a user's identity only at the start of a session, leaving systems vulnerable to session hijacking. Continuous authentication, by contrast, verifies the user throughout the session by monitoring behavior. This study investigates the long-term feasibility of eye tracking as a behavioral biometric for continuous authentication in virtual reality (VR) environments, using data from the GazeBaseVR dataset. Our approach evaluates three architectures (Transformer Encoder, DenseNet, and XGBoost) on short- and long-term data to determine their efficacy for user identification. Initial results indicate that both the Transformer Encoder and DenseNet models achieve accuracy of up to 97% in short-term settings, effectively capturing unique gaze patterns. However, when tested on data collected 26 months later, model accuracy declined significantly, falling as low as 1.78% on some tasks. To address this, we propose periodic model updates incorporating recent data, which restore accuracy to over 95%. These findings highlight the adaptability required of gaze-based continuous authentication systems and underscore the need for model retraining to accommodate evolving user behavior. Our study provides insights into the efficacy and limitations of eye tracking as a biometric for VR authentication, paving the way for adaptive, secure VR user experiences.