Dense 3D reconstruction from depth observations is a key component of robotics, augmented reality, and digital inspection. Traditional volumetric fusion techniques such as the truncated signed distance function (TSDF) enable efficient, deterministic geometry reconstruction, but they rely on heuristic weighting and do not convey uncertainty in a systematic, transparent way. Recent neural implicit methods, by contrast, achieve high fidelity but typically require substantial GPU resources for optimization and offer limited interpretability for downstream decision-making. This work presents BayesFusion-SDF, a CPU-centric probabilistic signed distance fusion framework that models geometry as a sparse Gaussian random field with a well-defined posterior distribution over voxel distances. A coarse TSDF reconstruction first defines an adaptive narrow-band domain; depth observations are then fused through a heteroscedastic Bayesian formulation solved with sparse linear algebra and preconditioned conjugate gradients. Randomized diagonal estimators provide fast approximations of posterior uncertainty, enabling uncertainty-aware surface extraction and next-best-view planning. Experiments on a controlled ablation scene and a CO3D object sequence show that the method improves geometric accuracy over TSDF baselines and yields uncertainty estimates useful for active sensing. The proposed formulation offers an interpretable, predictable, and lightweight alternative to GPU-heavy neural reconstruction methods while retaining a principled probabilistic interpretation. GitHub: https://mazumdarsoumya.github.io/BayesFusionSDF
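The core computational recipe in the abstract (a Gaussian-random-field posterior over voxel distances, heteroscedastic observation fusion solved with preconditioned conjugate gradients, and a Hutchinson-style randomized estimator for the posterior variance diagonal) can be sketched on a toy 1-D narrow band. This is a minimal illustration, not the paper's implementation: the Laplacian smoothness prior, the noise model, and all variable names are assumptions chosen for the sketch.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 64  # voxels in a toy 1-D narrow band

# Smoothness prior: precision built from a 1-D Laplacian, giving a sparse
# Gaussian random field over voxel distances (illustrative choice).
L = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
prior_prec = 0.5 * (L.T @ L) + 1e-3 * identity(n)

# Synthetic SDF observations with heteroscedastic noise: variance grows
# with distance from the surface, mimicking depth-sensor error behavior.
x = np.linspace(-1.0, 1.0, n)
true_sdf = x                                # ground truth for the toy scene
obs_var = 0.01 + 0.05 * np.abs(x)           # per-voxel noise variance
z = true_sdf + rng.normal(0.0, np.sqrt(obs_var))
obs_prec = diags(1.0 / obs_var)

# Gaussian posterior: precision A and MAP mean A^{-1} b, solved with
# conjugate gradients under a Jacobi preconditioner (M approximates A^{-1}).
A = prior_prec + obs_prec                   # posterior precision (SPD, sparse)
b = obs_prec @ z                            # information vector (zero prior mean)
M = diags(1.0 / A.diagonal())               # Jacobi preconditioner
mean, info = cg(A, b, M=M)
assert info == 0                            # CG converged

# Hutchinson-style randomized diagonal estimator for posterior variance:
# diag(A^{-1}) ~= E[v * (A^{-1} v)] with Rademacher probe vectors v.
probes = 64
var_est = np.zeros(n)
for _ in range(probes):
    v = rng.choice([-1.0, 1.0], size=n)
    u, _ = cg(A, v, M=M)
    var_est += v * u
var_est /= probes
```

Each probe costs one PCG solve against the same sparse precision matrix, which is what makes the uncertainty estimate cheap enough to drive next-best-view selection on a CPU.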