This paper presents a correspondence-free, function-based sim-to-real learning method for controlling deformable freeform surfaces. Unlike traditional sim-to-real transfer methods that rely heavily on marker points with full correspondences, our approach simultaneously learns a deformation function space and a confidence map -- both parameterized by a neural network -- to map simulated shapes to their real-world counterparts. As a result, the sim-to-real learning can be conducted with input either from a 3D scanner as point clouds (without correspondences) or from a motion capture system as marker points (tolerating missed markers). The resultant sim-to-real transfer can be seamlessly integrated into a neural network-based computational pipeline for inverse kinematics and shape control. We demonstrate the versatility and adaptability of our method with both types of vision devices and on four pneumatically actuated soft robots: a deformable membrane, a robotic mannequin, and two soft manipulators.
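The correspondence-free idea described above can be illustrated with a confidence-weighted, nearest-neighbor (Chamfer-style) loss: each point predicted by the sim-to-real map is matched to its closest scanned point, so no marker correspondences are required, and per-point confidences down-weight unreliable regions. This is only a minimal sketch under assumed names (`confidence_weighted_chamfer`, the one-sided matching, the normalization), not the paper's actual formulation.

```python
import numpy as np

def confidence_weighted_chamfer(pred_pts, scan_pts, conf):
    """One-sided Chamfer loss between a predicted real-world shape and a
    scanned point cloud (illustrative sketch, not the paper's method).

    pred_pts : (N, 3) points produced by the learned sim-to-real map
    scan_pts : (M, 3) raw scanner points, with no known correspondences
    conf     : (N,)   per-point confidence weights in [0, 1]
    """
    # Pairwise squared distances, shape (N, M), via broadcasting.
    d2 = ((pred_pts[:, None, :] - scan_pts[None, :, :]) ** 2).sum(axis=-1)
    # Each predicted point is matched to its nearest scanned point.
    nearest = d2.min(axis=1)
    # Confidence weights normalized to sum to one (epsilon avoids 0/0).
    w = conf / (conf.sum() + 1e-9)
    return float((w * nearest).sum())
```

With identical clouds the loss is zero; displacing the predicted points, or lowering the confidence of poorly observed points, changes the loss accordingly, which is what lets the network tolerate missed markers.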