Partial point clouds captured by 3D sensors lose both geometric and semantic information. Most existing point cloud completion methods are built on rotation-variant frameworks trained on data in canonical poses, which limits their applicability in real-world scenarios. Data augmentation with random rotations can partially mitigate this issue, but it significantly increases the learning burden and still fails to guarantee robust performance under arbitrary poses. To address this challenge, we propose the Rotation-Equivariant Anchor Transformer (REVNET), a novel framework built on the Vector Neuron (VN) network for robust point cloud completion under arbitrary rotations. To preserve local detail, we represent partial point clouds as sets of equivariant anchors and design a VN Missing Anchor Transformer that predicts the positions and features of the missing anchors. Furthermore, we extend VN networks with a rotation-equivariant bias formulation and a ZCA-based layer normalization to improve feature expressiveness. Leveraging the flexible conversion between equivariant and invariant VN features, our model generates point coordinates with greater stability. Experiments show that our method outperforms state-of-the-art approaches on the synthetic MVP dataset in the equivariant setting. On the real-world KITTI dataset, REVNET achieves competitive results compared to non-equivariant networks without requiring input pose alignment. The source code will be released at https://github.com/nizhf/REVNET.
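The rotation equivariance that REVNET inherits from Vector Neurons can be illustrated with a minimal numpy sketch (not the paper's implementation; all names here are illustrative). A VN feature is a set of channels, each carrying a 3D vector; a VN linear layer mixes channels only and never the x/y/z axes, so applying a rotation before or after the layer gives the same result:

```python
import numpy as np

rng = np.random.default_rng(0)

# A Vector Neuron feature: C channels, each a 3D vector (shape C x 3).
C_in, C_out = 8, 16
V = rng.standard_normal((C_in, 3))

# VN linear layer: the weight matrix acts on the channel dimension only,
# leaving the 3D axis untouched, so it commutes with any rotation.
W = rng.standard_normal((C_out, C_in))

def vn_linear(V, W):
    return W @ V  # shape (C_out, 3)

# Sample a random rotation via QR decomposition (force det = +1).
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1
R = Q

# Equivariance check: rotate-then-map equals map-then-rotate.
out_of_rotated_input = vn_linear(V @ R.T, W)
rotated_output = vn_linear(V, W) @ R.T
assert np.allclose(out_of_rotated_input, rotated_output)
```

The same commutation argument extends to the equivariant bias and ZCA-based normalization described above, which are designed so that the full network preserves this property end to end.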