Implicit functions such as Neural Radiance Fields (NeRFs), occupancy networks, and signed distance functions (SDFs) have become pivotal in computer vision for reconstructing detailed object shapes from sparse views. Achieving optimal performance with these models can be challenging due to the extreme sparsity of inputs and distribution shifts induced by data corruptions. To mitigate this, large, noise-free synthetic datasets can serve as shape priors to help models fill in gaps, but the resulting reconstructions must be approached with caution. Uncertainty estimation is crucial for assessing the quality of these reconstructions, particularly in identifying areas where the model is uncertain about the parts it has inferred from the prior. In this paper, we introduce Dropsembles, a novel method for uncertainty estimation in tuned implicit functions. We demonstrate the efficacy of our approach through a series of experiments, starting with toy examples and progressing to a real-world scenario. Specifically, we train a Convolutional Occupancy Network on synthetic anatomical data and test it on low-resolution MRI segmentations of the lumbar spine. Our results show that Dropsembles achieves the accuracy and calibration levels of deep ensembles at a significantly lower computational cost.
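As background for the comparison with deep ensembles, the sketch below illustrates the general idea of Monte-Carlo sampling for uncertainty estimation: repeated stochastic forward passes through a model with dropout, using the spread of predictions as a per-point uncertainty map. This is a generic baseline on a toy linear model, not the Dropsembles method itself, whose details are given in the main text; all names here (`dropout_forward`, the toy weights) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, w, p=0.5):
    """One stochastic forward pass of a toy linear model.

    Dropout is applied to the weights and rescaled by 1/(1-p),
    so each call yields a slightly different prediction.
    (Illustrative only; not the paper's Dropsembles method.)
    """
    mask = rng.random(w.shape) > p
    return x @ (w * mask) / (1 - p)

# Toy data: 5 query points with 3 features, and fixed "trained" weights.
x = rng.standard_normal((5, 3))
w = rng.standard_normal((3, 1))

# Monte-Carlo sampling: repeat the stochastic pass many times and use
# the spread of the predictions as an uncertainty estimate per point.
samples = np.stack([dropout_forward(x, w) for _ in range(100)])
mean = samples.mean(axis=0)        # predictive mean, shape (5, 1)
uncertainty = samples.std(axis=0)  # per-point uncertainty, shape (5, 1)

print(mean.shape, uncertainty.shape)
```

A deep ensemble obtains the same kind of spread from several independently trained networks, which multiplies training cost; dropout-based sampling reuses a single set of weights, which is the cost regime the abstract's comparison refers to.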