While humans intuitively manipulate garments and other textile items swiftly and accurately, this remains a significant challenge for robots. A factor crucial to human performance is the ability to imagine, a priori, the intended result of a manipulation and hence form predictions of the garment's pose. This allows us to plan from highly obstructed states, adapt our plans as we gather more information, and react swiftly to unforeseen circumstances. Robots, on the other hand, struggle to establish such intuitions and to form tight links between plans and observations. This can be attributed in part to the high cost of obtaining densely labelled data for textile manipulation, in both quality and quantity. The problem of data collection is a long-standing issue in data-driven approaches to garment manipulation. Currently, high-quality labelled garment manipulation data is generated mainly through advanced data-capture procedures that create simplified state estimations from real-world observations. In this work, by contrast, we propose to generate real-world observations from given object states. To achieve this, we present GARField (Garment Attached Radiance Field), a differentiable rendering architecture that enables data generation from simulated states stored as triangle meshes. Code will be available at https://ddonatien.github.io/garfield-website/