Point clouds captured by scanning sensors are often perturbed by noise, which severely degrades downstream tasks such as surface reconstruction and shape understanding. Previous works mostly train neural networks on noisy-clean point cloud pairs to learn denoising priors, which requires extensive manual effort. In this work, we introduce U-CAN, an Unsupervised framework for point cloud denoising with Consistency-Aware Noise2Noise matching. Specifically, we leverage a neural network to infer a multi-step denoising path for each point of a shape or scene with a noise-to-noise matching scheme. We achieve this through a novel loss that enables statistical reasoning over multiple noisy point cloud observations. We further introduce a novel constraint on denoised geometry consistency for learning consistency-aware denoising patterns. We show that the proposed constraint is general: it is not limited to the 3D domain and also benefits 2D image denoising. Evaluations on widely used benchmarks for point cloud denoising, point cloud upsampling, and image denoising show significant improvements over state-of-the-art unsupervised methods, and U-CAN also produces results comparable to supervised methods.
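The noise-to-noise intuition underlying the matching scheme can be sketched in a few lines. The following is a minimal illustrative example, not U-CAN's actual network or loss: given two independent noisy observations of the same surface, matching each point against the other observation and averaging the matched neighbors statistically cancels the noise and pulls points back toward the underlying geometry. All function names and parameters here (`n2n_denoise_step`, `k`, the noise level) are hypothetical choices for illustration.

```python
# Sketch of the noise-to-noise matching intuition (assumed illustration,
# not the paper's method): average the k nearest neighbors found in an
# *independent* noisy observation of the same surface.
import numpy as np

def n2n_denoise_step(obs_a, obs_b, k=16):
    """Move each point of obs_a to the mean of its k nearest
    neighbors in the second noisy observation obs_b."""
    # Pairwise squared distances (brute force; fine for small clouds).
    d2 = ((obs_a[:, None, :] - obs_b[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]  # k nearest matches in obs_b
    return obs_b[idx].mean(axis=1)       # noise-to-noise average

rng = np.random.default_rng(0)
# Clean surface: unit circle in 2D; two independent noisy scans of it.
theta = rng.uniform(0, 2 * np.pi, 2000)
clean = np.stack([np.cos(theta), np.sin(theta)], axis=1)
obs_a = clean + rng.normal(0, 0.05, clean.shape)
obs_b = clean + rng.normal(0, 0.05, clean.shape)

def radial_err(pts):
    """Mean distance to the unit circle (ground-truth surface)."""
    return np.abs(np.linalg.norm(pts, axis=1) - 1.0).mean()

denoised = n2n_denoise_step(obs_a, obs_b)
print(radial_err(obs_a), radial_err(denoised))  # error drops after one step
```

Repeating such a step gives a multi-step denoising path per point, loosely mirroring the paths the abstract describes; U-CAN instead learns these paths with a neural network and its consistency-aware loss.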