Hybrid rigid-soft robots combine the precision of rigid manipulators with the compliance and adaptability of soft arms, offering a promising approach for versatile grasping in unstructured environments. However, coordinating hybrid robots remains challenging due to difficulties in modeling, perception, and cross-domain kinematics. In this work, we present a novel augmented reality (AR)-based physical human-robot interaction framework that enables direct teleoperation of a hybrid rigid-soft robot for simple reaching and grasping tasks. Using an AR headset, users can interact with a simulated model of the robotic system, integrated into a general-purpose physics engine and superimposed on the real system, allowing execution to be previewed in simulation before real-world deployment. To ensure consistent behavior between the virtual and physical robots, we introduce a real-to-simulation parameter identification pipeline that leverages the inherent geometric properties of the soft robot, enabling accurate modeling of its static and dynamic behavior as well as the control system's response.
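The real-to-simulation parameter identification described above can be illustrated with a minimal sketch: fit the parameters of a simple surrogate model of the soft arm so that its simulated response matches a recorded trajectory. The damped-spring model, the parameter names `k` and `c`, and the least-squares formulation below are illustrative assumptions, not the paper's actual geometry-based pipeline.

```python
# Hypothetical real-to-sim identification sketch: recover stiffness k and
# damping c of a 1-DOF damped-spring surrogate of the soft arm by matching
# a "measured" trajectory (here generated synthetically with known params).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(params, t):
    """Damped-spring surrogate: x'' = -k*x - c*x', released from x = 1."""
    k, c = params
    sol = solve_ivp(lambda _, y: [y[1], -k * y[0] - c * y[1]],
                    (t[0], t[-1]), [1.0, 0.0], t_eval=t)
    return sol.y[0]

t = np.linspace(0.0, 5.0, 200)
true_params = (4.0, 0.5)                 # stands in for the real arm's behavior
measured = simulate(true_params, t)      # stands in for recorded sensor data

# Identify parameters by minimizing the sim-vs-real residual.
fit = least_squares(lambda p: simulate(p, t) - measured,
                    x0=[1.0, 1.0], bounds=([0.0, 0.0], [np.inf, np.inf]))
k_hat, c_hat = fit.x
```

In the actual framework, the residual would compare the physics-engine model against motion-capture or camera observations of the soft arm, and the identified parameters would then drive the AR-overlaid simulation.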