We introduce a technique for the reconstruction of high-fidelity surfaces from multi-view images. Our technique uses a new point-based representation, the dipole sum, which generalizes the winding number to allow for interpolation of arbitrary per-point attributes in point clouds with noisy or outlier points. Using dipole sums allows us to represent implicit geometry and radiance fields as per-point attributes of a point cloud, which we initialize directly from structure from motion. We additionally derive Barnes-Hut fast summation schemes for accelerated forward and reverse-mode dipole sum queries. These queries facilitate the use of ray tracing to efficiently and differentiably render images with our point-based representations, and thus update their point attributes to optimize scene geometry and appearance. We evaluate this inverse rendering framework against state-of-the-art alternatives, based on ray tracing of neural representations or rasterization of Gaussian point-based representations. Our technique significantly improves reconstruction quality at equal runtimes, while also supporting more general rendering techniques such as shadow rays for direct illumination. In the supplement, we provide interactive visualizations of our results.
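As context for the dipole-sum representation, the sketch below shows its un-generalized special case: the generalized winding number of an oriented point cloud, evaluated by naive O(N) summation (without the Barnes-Hut acceleration described above). The function name, and the assumption that each point carries a position, outward normal, and area weight, are illustrative choices, not the paper's actual API:

```python
import numpy as np

def winding_number(query, points, normals, areas):
    """Generalized winding number of an oriented point cloud at a query point.

    Each point contributes an area-weighted dipole (Poisson) kernel term,
    i.e. its signed solid angle as seen from the query. For a query inside
    a closed surface the sum is close to 1; outside, close to 0.
    """
    d = points - query                      # (N, 3) offsets to each point
    r = np.linalg.norm(d, axis=1)           # (N,) distances
    dot = np.einsum('ij,ij->i', d, normals) # (N,) per-point (p - q) . n
    return np.sum(areas * dot / (4.0 * np.pi * r**3))
```

The dipole sum generalizes this by attaching arbitrary attributes (e.g. geometry and radiance parameters) to each point and interpolating them with the same kernel weights, which is what makes the representation robust to noisy and outlier points.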