Social-physical human-robot interaction (spHRI) is difficult to study: building and programming robots that integrate multiple interaction modalities is costly and slow, while VR-based prototypes often lack physical contact, breaking users' visuo-tactile expectations. We present XR$^3$, a co-located dual-VR-headset platform for HRI research in which an attendee and a hidden operator share the same physical space while experiencing different virtual embodiments. The attendee sees an expressive virtual robot that interacts face-to-face in a shared virtual environment. The robot's upper-body motion, head and gaze behavior, and facial expressions are driven in real time by the operator's tracked limb and facial signals. Because the operator is co-present and calibrated into the same coordinate frame, the operator can also touch the attendee, producing perceived robot touch synchronized with the robot's visible hands. Finger and hand motion is mapped onto the robot avatar via inverse kinematics to support precise contact. Beyond motion retargeting, XR$^3$ supports social retargeting of multiple nonverbal cues that can be varied experimentally while physical interaction is held constant. We detail the system design and calibration, and demonstrate the platform in a touch-based Wizard-of-Oz study, lowering the barrier to prototyping and evaluating embodied, contact-based robot behaviors.
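The abstract notes that touch synchronization depends on calibrating both headsets into the same coordinate frame. The paper does not specify the calibration method; a common approach for this kind of alignment is a rigid least-squares fit (Kabsch/SVD) over corresponding tracked reference points seen by both tracking systems. The sketch below is illustrative only, with the function name `align_frames` and the point-pair input being assumptions, not the authors' implementation.

```python
import numpy as np

def align_frames(pts_a, pts_b):
    """Estimate the rigid transform (R, t) mapping points expressed in
    tracking frame A to the same physical points expressed in frame B,
    via the Kabsch algorithm (SVD of the cross-covariance matrix).

    pts_a, pts_b: (N, 3) arrays of corresponding reference points,
    e.g. a shared controller touched to fixed markers by both systems.
    """
    a = np.asarray(pts_a, dtype=float)
    b = np.asarray(pts_b, dtype=float)
    # Center both point sets on their centroids.
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    # Cross-covariance and its SVD.
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

Once `R` and `t` are known, every pose from the operator's tracking frame can be re-expressed in the attendee's frame (`p_b = R @ p_a + t`), so the operator's real hand and the robot avatar's visible hand coincide at the moment of contact.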