We investigate how vibrotactile wrist feedback can enhance spatial guidance for handheld tool movement in optical see-through augmented reality (AR). While AR overlays are widely used to support surgical tasks, visual occlusion, lighting conditions, and interface ambiguity can compromise precision and confidence. To address these challenges, we designed a multimodal system combining AR visuals with a custom wrist-worn haptic device that delivers directional and state-based cues. A formative study with experienced surgeons and residents identified key tool maneuvers and preferences for reference mappings, guiding our cue design. In a cue identification experiment (N=21), participants accurately recognized five vibration patterns under visual load, with higher recognition for full-actuator states than for spatial direction cues. In a guidance task (N=27), participants using both AR and haptics achieved significantly higher spatial precision (5.8 mm) and usability (SUS = 88.1) than those using either modality alone, despite a modest increase in task time. Participants reported that haptic cues provided reassuring confirmation and reduced cognitive effort during alignment. Our results highlight the promise of integrating wrist-based haptics into AR systems for high-precision, visually complex tasks such as surgical guidance. We discuss design implications for multimodal interfaces supporting confident, efficient tool manipulation.