Prolonged exposure to virtual reality (VR) systems leads to visual fatigue, impairing user comfort, performance, and safety, particularly in high-stakes or long-duration applications. Existing fatigue detection approaches rely on subjective questionnaires or intrusive physiological signals, such as EEG, heart rate, or eye-blink count, which limits their scalability and real-time applicability. This paper presents a deep learning-based study of visual fatigue detection using continuous eye-gaze trajectories recorded in VR. We use the GazeBaseVR dataset, comprising binocular eye-tracking data from 407 participants across five immersive tasks, extract cyclopean eye-gaze angles, and evaluate six deep classifiers. Our results demonstrate that EKYT achieves up to 94% accuracy, particularly in tasks demanding high visual attention, such as video viewing and text reading. We further analyze gaze variance and subjective fatigue measures, finding significant behavioral differences between fatigued and non-fatigued conditions. These findings establish eye-gaze dynamics as a reliable, nonintrusive modality for continuous fatigue detection in immersive VR, with practical implications for adaptive human-computer interaction.