Learning-based isosurface extraction methods have recently emerged as a robust and efficient alternative to axiomatic techniques. However, the vast majority of such approaches rely on supervised training with axiomatically computed ground truth, and thus potentially inherit the biases and data artifacts of the corresponding axiomatic methods. Steering away from such dependencies, we propose a self-supervised training scheme for the Neural Dual Contouring meshing framework, resulting in our method: Self-Supervised Dual Contouring (SDC). Instead of optimizing predicted mesh vertices with supervised training, we use two novel self-supervised loss functions that encourage consistency between the input implicit function and distances to the generated mesh, up to first order. Meshes reconstructed by SDC surpass existing data-driven methods in capturing intricate details while being more robust to possible irregularities in the input. Furthermore, we use the same self-supervised training objective, which links the inferred mesh to the input SDF, to regularize the training of Deep Implicit Networks (DINs). We demonstrate that the resulting DINs produce higher-quality implicit functions, ultimately leading to more accurate and detail-preserving surfaces than prior baselines across different input modalities. Finally, we show that our self-supervised losses improve meshing performance in the single-view reconstruction task by enabling joint training of the predicted SDF and the resulting output mesh. We open-source our code at https://github.com/Sentient07/SDC.
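To make the abstract's central idea concrete, the sketch below illustrates one plausible reading of a zeroth- and first-order self-supervised consistency objective: the magnitude of the input SDF at query points should match the distance to the generated surface, and the (finite-difference) gradients of the two distance fields should agree. This is a minimal illustrative example under our own assumptions, not the paper's actual loss implementation; the function names, the analytic sphere SDF, and the use of a vertex point set as a stand-in for the predicted mesh are all hypothetical simplifications.

```python
import numpy as np

def sdf_sphere(p, r=1.0):
    # Analytic SDF of a sphere of radius r; stand-in for the input SDF. (hypothetical)
    return np.linalg.norm(p, axis=-1) - r

def dist_to_surface(points, verts):
    # Unsigned distance from query points to the generated surface,
    # approximated here by a dense set of predicted mesh vertices. (hypothetical)
    d = np.linalg.norm(points[:, None, :] - verts[None, :, :], axis=-1)
    return d.min(axis=1)

def consistency_losses(points, verts, sdf, eps=1e-3):
    # Zeroth order: |SDF(x)| should equal the distance to the generated mesh.
    l0 = np.mean((np.abs(sdf(points)) - dist_to_surface(points, verts)) ** 2)
    # First order: central-difference gradients of both distance fields should agree.
    g_sdf, g_mesh = [], []
    for ax in range(3):
        e = np.zeros(3)
        e[ax] = eps
        g_sdf.append((np.abs(sdf(points + e)) - np.abs(sdf(points - e))) / (2 * eps))
        g_mesh.append((dist_to_surface(points + e, verts)
                       - dist_to_surface(points - e, verts)) / (2 * eps))
    l1 = np.mean((np.stack(g_sdf, -1) - np.stack(g_mesh, -1)) ** 2)
    return l0, l1
```

If the vertices densely sample the true zero level set (here, the unit sphere), both losses approach zero; a mesh that drifts off the level set, or whose distance field is misoriented, is penalized even though no ground-truth mesh is ever referenced, which is the self-supervision the abstract describes.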