Existing deep learning methods for point cloud reconstruction and denoising rely on small datasets of 3D shapes. We circumvent this limitation by leveraging deep learning models trained on billions of images. We propose a method that reconstructs point clouds from a few images and denoises point clouds via their renderings, exploiting prior knowledge distilled from image-based deep learning models. To improve reconstruction in constrained settings, we regularize the training of a differentiable renderer with a hybrid surface and appearance representation by introducing semantic consistency supervision. In addition, we propose a pipeline to finetune Stable Diffusion to denoise renderings of noisy point clouds, and we demonstrate how these learned filters can be used to remove point cloud noise without any 3D supervision. We compare our method with DSS and PointRadiance and achieve higher-quality 3D reconstruction on the Sketchfab Testset and the SCUT Dataset.