Manipulating garments and fabrics has long been a critical endeavor in the development of home-assistant robots. However, due to their complex dynamics and topological structures, garments pose significant manipulation challenges. Recent successes in reinforcement learning and vision-based methods offer promising avenues for learning garment manipulation. Nevertheless, these approaches are severely constrained by current benchmarks, which offer limited task diversity and unrealistic simulation behavior. Therefore, we present GarmentLab, a content-rich benchmark and realistic simulation designed for deformable-object and garment manipulation. Our benchmark encompasses a diverse range of garment types, robotic systems, and manipulators. The abundant tasks in the benchmark further explore the interactions between garments, deformable objects, rigid bodies, fluids, and the human body. Moreover, by incorporating multiple simulation methods such as FEM and PBD, along with our proposed sim-to-real algorithms and real-world benchmark, we aim to significantly narrow the sim-to-real gap. We evaluate state-of-the-art vision methods, reinforcement learning, and imitation learning approaches on these tasks, highlighting the challenges faced by current algorithms, notably their limited generalization capabilities. Our proposed open-source environments and comprehensive analysis promise to boost future research in garment manipulation by unlocking the full potential of these methods. We will open-source our code as soon as possible. The videos in the supplementary files provide more details about our work. Our project page is available at: https://garmentlab.github.io/