Category-level object pose and shape estimation from a single depth image has recently drawn research attention due to its wide applications in robotics and autonomous driving. The task is particularly challenging because the three unknowns, object pose, object shape, and model-to-measurement correspondences, are coupled together, yet only a single view of depth measurements is provided. The vast majority of the prior work heavily relies on data-driven approaches to obtain solutions to at least one of the unknowns, and typically two, running the risk of failing to generalize to unseen domains. The shape representations used in the prior work also mainly focus on point clouds and signed distance fields (SDFs). In stark contrast to the prior work, we approach the problem using an iterative estimation method that does not require learning from any pose-annotated data. In addition, we adopt a novel mesh-based object active shape model that has not been explored in the previous literature. Our algorithm, named ShapeICP, has its foundation in the iterative closest point (ICP) algorithm but is equipped with additional features for the category-level pose and shape estimation task. The results show that even without using any pose-annotated data, ShapeICP surpasses many data-driven approaches that rely on pose data for training, opening up a new solution space for researchers to consider.
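To make the idea concrete, the abstract's core loop, ICP-style alternation between correspondence search, pose update, and active-shape-coefficient update, can be sketched as follows. This is a minimal illustration, not the paper's actual ShapeICP implementation: it assumes a linear active shape model (mean points plus deformation basis) rather than the paper's mesh-based one, uses brute-force nearest-neighbour correspondences, the Kabsch/SVD pose solver, and a linear least-squares shape update. All function and variable names here are our own.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) with R @ p + t ~= q for paired points P, Q (both N x 3)."""
    p0, q0 = P.mean(0), Q.mean(0)
    H = (P - p0).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, q0 - R @ p0

def shape_icp_sketch(measured, mean_shape, basis, iters=30):
    """Alternate pose and shape updates, ICP-style (illustrative sketch only).

    measured:   (N, 3) single-view depth points of the object
    mean_shape: (M, 3) mean model points of the category
    basis:      (K, M, 3) linear deformation modes of the active shape model
    Returns rotation R, translation t, and shape coefficients b.
    """
    b = np.zeros(basis.shape[0])
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        model = mean_shape + np.tensordot(b, basis, axes=1)   # deformed model, (M, 3)
        world = model @ R.T + t                               # model in camera frame
        # 1) correspondences: each measured point -> closest model point
        d2 = ((measured[:, None, :] - world[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)
        # 2) pose update with shape fixed
        R, t = kabsch(model[idx], measured)
        # 3) shape update with pose fixed: linear least squares in the model frame
        target = (measured - t) @ R                           # inverse-transform measurements
        A = basis[:, idx, :].reshape(basis.shape[0], -1).T    # (3N, K)
        r = (target - mean_shape[idx]).reshape(-1)            # (3N,)
        b, *_ = np.linalg.lstsq(A, r, rcond=None)
    return R, t, b
```

The alternation mirrors classic ICP: each sub-step (correspondence, pose, shape) is solved in closed form with the others held fixed, so each iteration does not increase the alignment residual. The real task is harder than this sketch suggests, since a single depth view observes only part of the object surface.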