Turning garments right-side out is a challenging manipulation task: it is highly dynamic, entails rapid contact changes, and is subject to severe visual occlusion. We introduce Right-Side-Out, a zero-shot sim-to-real framework that solves this challenge by exploiting task structure. We decompose the task into Drag/Fling, which creates and stabilizes an access opening, followed by Insert&Pull, which inverts the garment. Each step uses a depth-inferred, keypoint-parameterized bimanual primitive that sharply reduces the action space while preserving robustness. Efficient data generation is enabled by our custom-built, high-fidelity, GPU-parallel Material Point Method (MPM) simulator, which models thin-shell deformation and provides robust, efficient contact handling for batched rollouts. Built on this simulator, our fully automated pipeline scales data generation by randomizing garment geometry, material parameters, and viewpoints, producing depth images, masks, and per-primitive keypoint labels without any human annotation. With a single depth camera, policies trained entirely in simulation deploy zero-shot on real hardware, achieving success rates of up to 81.3%. By combining task decomposition with high-fidelity simulation, our framework tackles highly dynamic, severely occluded tasks without laborious human demonstrations.