Gaze interaction presents a promising avenue in Virtual Reality (VR) due to its intuitive and efficient user experience. Yet the depth control inherent in the human visual system remains underutilized in current methods. In this study, we introduce FocusFlow, a hands-free interaction method that capitalizes on human visual depth perception within 3D VR scenes. We first develop a binocular visual depth detection algorithm to characterize eye input. We then propose a layer-based user interface and introduce the concept of a 'Virtual Window' that offers intuitive and robust gaze-depth interaction in VR, despite the limited accuracy and precision of visual depth estimation at farther distances. Finally, to help novice users actively manipulate their visual depth, we propose two learning strategies that use different visual cues to help users master visual depth control. Our user studies with 24 participants demonstrate the usability of the proposed Virtual Window concept as a gaze-depth interaction method. In addition, our findings reveal that the user experience can be enhanced through an effective learning process with adaptive visual cues, helping users develop muscle memory for this new input mechanism. We conclude the paper by discussing strategies to optimize learning and potential research topics in gaze-depth interaction.
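The abstract does not specify the binocular depth-detection algorithm. As a purely illustrative sketch (the function name and geometry below are assumptions, not the authors' implementation), one common way to estimate gaze depth from the two eyes' gaze rays is to find their closest point of approach and measure its distance from the midpoint between the eyes:

```python
import numpy as np

def gaze_depth_from_vergence(left_origin, left_dir, right_origin, right_dir):
    """Hypothetical sketch: estimate fixation depth as the distance from the
    mid-eye (cyclopean) point to the closest point of approach of the two
    gaze rays. Not the paper's actual algorithm."""
    # Normalize the gaze direction vectors.
    d1 = left_dir / np.linalg.norm(left_dir)
    d2 = right_dir / np.linalg.norm(right_dir)
    w0 = left_origin - right_origin
    # Standard closest-point-between-two-lines coefficients.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        # Near-parallel rays (gaze at "infinity"): depth is unreliable.
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    # Midpoint of the shortest segment between the two rays.
    fixation = (left_origin + s * d1 + right_origin + t * d2) / 2
    cyclopean = (left_origin + right_origin) / 2
    return float(np.linalg.norm(fixation - cyclopean))
```

In practice, such a geometric estimate degrades at larger distances because the vergence angle changes very little beyond a few meters, which is consistent with the accuracy constraint the abstract notes.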