Learning neural implicit fields of 3D shapes is a rapidly emerging area that enables shape representation at arbitrary resolutions. Owing to this flexibility, neural implicit fields have succeeded in many research areas, including shape reconstruction, novel-view image synthesis, and, more recently, object pose estimation. Neural implicit fields enable learning dense correspondences between the camera space and the object's canonical space, including regions unobserved in camera space, which significantly boosts object pose estimation performance in challenging scenarios such as highly occluded objects and novel shapes. Despite this progress, predicting canonical coordinates for unobserved camera-space regions remains challenging because no direct observational signal is available; the model must therefore rely heavily on its generalization ability, resulting in high uncertainty. Consequently, densely sampling points across the entire camera space may yield inaccurate estimates that hinder the learning process and compromise performance. To alleviate this problem, we propose a method that combines an SO(3)-equivariant convolutional implicit network with a positive-incentive point sampling (PIPS) strategy. The SO(3)-equivariant convolutional implicit network estimates point-level attributes with SO(3)-equivariance at arbitrary query locations and outperforms most existing baselines. The PIPS strategy dynamically determines sampling locations based on the input, thereby boosting the network's accuracy and training efficiency. Our method outperforms the state of the art on three pose estimation datasets. Notably, it achieves significant improvements in challenging scenarios, such as objects captured under unseen poses, heavy occlusion, novel geometry, and severe noise.