Robotic guidance systems have shown promise in supporting blind and visually impaired (BVI) individuals with wayfinding and obstacle avoidance. However, most existing systems assume a clear path and do not support a critical aspect of navigation: environmental interactions that require manipulating objects to enable movement. These interactions are challenging for a human-robot pair because they demand (i) precise localization and manipulation of interaction targets (e.g., pressing elevator buttons) and (ii) dynamic coordination between the user's and robot's movements (e.g., pulling out a chair to sit). We present a collaborative human-robot approach that combines our robotic guide dog's precise sensing and localization capabilities with the user's ability to perform physical manipulation. The system alternates between two modes: lead mode, in which the robot detects the target and guides the user to it, and adaptation mode, in which the robot adjusts its motion as the user interacts with the environment (e.g., opening a door). Evaluation results show that our system enables navigation that is safer, smoother, and more efficient than both a traditional white cane and a non-adaptive guiding system, with the performance gap widening as tasks demand higher precision in locating interaction targets. These findings highlight the promise of human-robot collaboration in advancing assistive technologies toward more generalizable and realistic navigation support.