As social virtual reality (VR) grows more popular, addressing accessibility for blind and low vision (BLV) users is increasingly critical. Researchers have proposed an AI "sighted guide" to help users navigate VR and answer their questions, but it has not been studied with users. To address this gap, we developed a large language model (LLM)-powered guide and studied its use with 16 BLV participants in virtual environments with confederates posing as other users. We found that when alone, participants treated the guide as a tool, but treated it companionably around others, giving it nicknames, rationalizing its mistakes with its appearance, and encouraging confederate-guide interaction. Our work furthers understanding of guides as a versatile method for VR accessibility and presents design recommendations for future guides.