Surfaces are typically represented as meshes, which can be extracted from volumetric fields via meshing or optimized directly as surface parameterizations. Volumetric representations occupy 3D space and have a large effective receptive field along rays, enabling stable and efficient optimization via volumetric rendering; however, subsequent meshing often produces overly dense meshes and introduces accumulated errors. In contrast, pure surface methods avoid meshing but capture only boundary geometry with a single-layer receptive field, making it difficult to learn intricate geometric details and increasing reliance on priors (e.g., shading or normals). We bridge this gap by differentiably turning a surface representation into a volumetric one, enabling end-to-end surface reconstruction via volumetric rendering to model complex geometries. Specifically, we soften a mesh into multiple semi-transparent layers that remain differentiable with respect to the base mesh, endowing it with a controllable 3D receptive field. Combined with a splatting-based renderer and a topology-control strategy, our method can be optimized in about 20 minutes to achieve accurate surface reconstruction while substantially improving mesh quality.
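The core idea of softening a mesh into semi-transparent layers that remain differentiable with respect to the base surface, and then rendering them volumetrically, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the paper's actual implementation: the layer offsets, per-layer opacities, and the function names `soften_layers` and `composite` are all hypothetical.

```python
import numpy as np

def soften_layers(verts, normals, offsets):
    """Shell a base surface into parallel semi-transparent layers by
    offsetting every vertex along its normal. Each layer is a linear
    (hence differentiable) function of the base vertices and normals,
    so gradients from volumetric rendering flow back to the mesh.

    verts:   (V, 3) base mesh vertices
    normals: (V, 3) per-vertex unit normals
    offsets: list of signed distances, one per layer (the 3D receptive
             field is controlled by how far these offsets reach)
    """
    return np.stack([verts + t * normals for t in offsets])

def composite(alphas, colors):
    """Front-to-back alpha compositing of per-layer samples along a ray,
    i.e. the standard volumetric rendering weights
    w_k = alpha_k * prod_{j<k} (1 - alpha_j).

    alphas: (K,)   opacity of each layer hit along the ray, near to far
    colors: (K, 3) color of each layer sample
    Returns the composited color and the per-layer weights.
    """
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return weights @ colors, weights

# Toy usage: a flat patch softened into 3 layers, then one ray
# crossing two semi-transparent layers.
verts = np.zeros((4, 3))
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
layers = soften_layers(verts, normals, offsets=[-0.1, 0.0, 0.1])

color, weights = composite(np.array([0.5, 0.5]),
                           np.array([[1.0, 0.0, 0.0],
                                     [0.0, 1.0, 0.0]]))
```

In this sketch, collapsing the offsets toward zero recovers the original single-layer surface, while widening them gives each ray a thicker, multi-sample receptive field, which is the mechanism the abstract credits for stable volumetric optimization.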