Current research on deformable-object manipulation largely focuses on tasks such as folding clothes, handling ropes, and manipulating bags, while contact-rich tasks involving deformable objects remain relatively underexplored. When humans use cloth or sponges to wipe surfaces, they rely on both visual and tactile feedback. Yet existing vision-based algorithms still struggle with occlusion, and tactile perception for manipulation remains an evolving research area. Covering a surface with a deformable object demands not only perception but also precise robotic manipulation. To address this, we propose a method that leverages efficient and accessible simulators for task execution. Specifically, we train a reinforcement learning agent in simulation to manipulate deformable objects for surface-wiping tasks. We simplify the state representation of object surfaces using harmonic UV mapping, project contact feedback from the simulator onto 2D feature maps, and extract features efficiently with scaled grouped convolutions (SGCNN). The agent then outputs actions in a reduced-dimensional action space to generate coverage paths. Experiments demonstrate that our method outperforms previous approaches on key metrics, including total path length and coverage area. We deploy the generated paths on a Kinova Gen3 manipulator to perform wiping experiments on the back of a torso model, validating the feasibility of our approach.