Social-physical human-robot interaction (HRI) is difficult to study: building and programming robots that integrate multiple interaction modalities is costly and slow, while VR-based prototypes often lack physical contact capabilities, breaking the user's visuo-tactile expectations. We present VR2VR, a co-located dual-VR-headset platform for HRI research in which a participant and a hidden operator share the same physical space while experiencing different virtual embodiments. The participant sees an expressive virtual robot that interacts with them face-to-face in a shared virtual environment. The robot's upper-body movements, head and gaze behaviors, and facial expressions are mapped in real time from the operator's tracked limb and facial signals. Because the operator is physically co-present and calibrated into the same coordinate frame, they can also touch the participant, who thereby feels robot touch synchronized with the sight of the robot's hands on their own: the operator's finger and hand motion is mapped to the robot avatar via inverse kinematics to support precise contact. Beyond faithful motion retargeting for limb control, VR2VR supports social retargeting of multiple nonverbal cues, which can be varied experimentally while the physical interaction is held constant. We detail the system design, calibration workflow, and safety considerations, and demonstrate how the platform can be used for experimentation and data collection in a touch-based Wizard-of-Oz HRI study, illustrating how VR2VR lowers the barriers to rapidly prototyping and rigorously evaluating embodied, contact-based robot behaviors.
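The abstract notes that the operator is "calibrated into the same coordinate frame" as the participant, but gives no implementation details. As a minimal sketch of what such a calibration step could look like, assuming both headsets report positions of a shared physical calibration target, the classic Kabsch algorithm estimates the rigid transform that aligns one tracking frame to the other (the function names and the point-correspondence setup here are illustrative assumptions, not the paper's actual pipeline):

```python
import numpy as np

def calibrate_frames(src_pts, dst_pts):
    """Estimate the rigid transform (R, t) mapping points measured in the
    operator's tracking frame (src) to the shared frame (dst), via the
    Kabsch algorithm. Needs >= 3 non-collinear point correspondences."""
    src_mean = src_pts.mean(axis=0)
    dst_mean = dst_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src_pts - src_mean).T @ (dst_pts - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def to_shared_frame(R, t, p):
    """Map a single tracked operator point into the shared frame."""
    return R @ p + t
```

In a noiseless setting the transform is recovered exactly; with real tracking jitter, the same least-squares fit averages the error over the correspondences, which is why several spatially spread calibration points are preferable to a single one.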