Learning efficient representations of local features is a key challenge in feature-volume-based 3D neural mapping, especially in large-scale environments. In this paper, we introduce Decomposition-based Neural Mapping (DNMap), a storage-efficient large-scale 3D mapping method that employs a discrete representation based on a decomposition strategy. This strategy efficiently captures repetitive and representative shape patterns by decomposing each discrete embedding into component vectors that are shared across the embedding space. Our DNMap optimizes a small set of component vectors, rather than entire discrete embeddings, and learns to compose them rather than index the discrete embeddings. Furthermore, to complement mapping quality, we additionally learn low-resolution continuous embeddings that require only minimal storage. By combining these representations with a shallow neural network and an efficient octree-based feature volume, DNMap successfully approximates signed distance functions and compresses the feature volume while preserving mapping quality. Our source code is available at https://github.com/minseong-p/dnmap.
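The core storage argument can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's exact formulation: the sizes, the binary composition weights, and the additive composition rule are hypothetical stand-ins for the idea that each per-voxel embedding is composed from a small set of component vectors shared across the embedding space, so the learned parameters are the shared components plus compact per-voxel composition codes rather than a full embedding per voxel.

```python
import numpy as np

# Hypothetical sketch of decomposition-based embeddings (illustrative
# sizes and composition rule; not the paper's exact formulation).
rng = np.random.default_rng(0)

embed_dim = 8        # dimensionality of each local feature embedding
num_components = 4   # shared component vectors (the optimized parameters)
num_voxels = 1000    # e.g., octree leaf voxels needing an embedding

# Shared component vectors: the only full-precision embedding parameters.
components = rng.standard_normal((num_components, embed_dim))

# Instead of indexing a large codebook, each voxel stores a compact
# composition code over the shared components (here: random binary
# stand-ins for learned codes).
codes = rng.integers(0, 2, size=(num_voxels, num_components)).astype(float)

# Compose per-voxel embeddings on the fly via additive composition.
embeddings = codes @ components  # shape: (num_voxels, embed_dim)

# Rough storage proxy: parameters stored under composition vs. storing a
# dense embedding per voxel (ignoring that binary codes need only 1 bit).
composed_params = components.size + codes.size  # 32 + 4000 = 4032
dense_params = num_voxels * embed_dim           # 8000
print(composed_params, dense_params)
```

The parameter count is a loose proxy; the gap widens further when the binary codes are stored as bits rather than floats, which is the kind of saving a discrete, decomposed representation targets.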