Category-level 3D pose estimation is a fundamentally important problem in computer vision and robotics, e.g., for embodied agents or for training 3D generative models. However, existing methods for category-level object pose estimation require either large amounts of human annotations, CAD models, or input from RGB-D sensors. In contrast, we tackle the problem of learning to estimate category-level 3D pose solely from casually taken, object-centric videos without human supervision. We propose a two-step pipeline: First, we introduce a multi-view alignment procedure that determines canonical camera poses across videos, using a novel and robust cyclic-distance formulation for geometric and appearance matching based on reconstructed coarse meshes and DINOv2 features. In a second step, the canonical poses and reconstructed meshes enable us to train a model for 3D pose estimation from a single image. In particular, our model learns to estimate dense correspondences between images and a prototypical 3D template by predicting, for each pixel in a 2D image, a feature vector of the corresponding vertex in the template mesh. We demonstrate that our method outperforms all baselines at the unsupervised alignment of object-centric videos by a large margin and provides faithful and robust predictions in the wild. Our code and data are available at https://github.com/GenIntel/uns-obj-pose3d.
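The dense-correspondence step described above (predicting, for each pixel, a feature vector and matching it against template-mesh vertex features) can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the paper's implementation: feature shapes, the cosine-similarity matching rule, and the function name are all hypothetical.

```python
import numpy as np

def dense_correspondences(pixel_feats, vertex_feats):
    """Toy sketch: match each pixel's predicted feature to its nearest
    template-mesh vertex by cosine similarity.

    pixel_feats:  (H, W, D) per-pixel feature vectors predicted by the model.
    vertex_feats: (V, D) feature vectors attached to template-mesh vertices.
    Returns a (H, W) map of best-matching vertex indices and a (H, W)
    confidence map of the corresponding similarities.
    """
    H, W, D = pixel_feats.shape
    # L2-normalize so that the dot product equals cosine similarity.
    p = pixel_feats.reshape(-1, D)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    v = vertex_feats / np.linalg.norm(vertex_feats, axis=1, keepdims=True)
    sim = p @ v.T                    # (H*W, V) similarity matrix
    idx = sim.argmax(axis=1)         # best template vertex per pixel
    conf = sim.max(axis=1)           # matching confidence per pixel
    return idx.reshape(H, W), conf.reshape(H, W)
```

Such pixel-to-vertex correspondences, combined with the known 3D positions of the template vertices, are what make single-image pose estimation possible downstream, e.g., via a perspective-n-point solver.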