This study explores integrating sign language into virtual reality (VR) by examining the comprehensibility and user experience of viewing American Sign Language (ASL) videos captured with body-mounted 360-degree cameras. Ten participants identified ASL signs from videos recorded at three mounting positions: head, shoulder, and chest. The shoulder-mounted camera achieved the highest accuracy (85%), though differences between positions were not statistically significant. Participants reported that peripheral distortion in the 360-degree videos reduced clarity, pointing to a key area for improvement. Despite these challenges, the overall comprehension success rate of 83.3% demonstrates the potential of video-based ASL communication in VR. Feedback emphasized the need to refine camera angles, reduce distortion, and explore alternative mounting positions. Participants preferred signing over text-based communication in VR, underscoring the importance of developing this approach to enhance accessibility and collaboration for Deaf and Hard of Hearing (DHH) users in virtual environments.