Implicit Neural Representations (INRs) have emerged as a powerful tool for geometric representation, yet their suitability for physics-based simulation remains underexplored. While metrics such as the Hausdorff distance quantify surface reconstruction quality, they fail to capture the geometric regularity required for provable numerical performance. This work establishes a unified theoretical framework connecting INR training errors to the solution accuracy of Partial Differential Equations (PDEs), focusing on linear elliptic equations. We characterize the minimal geometric regularity an INR must satisfy to support well-posed boundary value problems and derive \emph{a priori} error estimates linking the neural network's function approximation error to the finite element discretization error. Our analysis reveals that, to match the convergence rate of linear finite elements, the INR training loss must scale quadratically with the mesh size.
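For illustration only (the notation below is introduced here and is not taken from the paper's own derivation), the quadratic scaling claim can be read through a standard perturbed \emph{a priori} bound for linear finite elements: if the INR-induced geometric error enters the estimate additively as
\[
\|u - u_h\|_{L^2(\Omega)} \;\le\; C\left(h^2 + \delta_{\mathrm{INR}}\right),
\]
where $h$ is the mesh size and $\delta_{\mathrm{INR}}$ is an error term controlled by the INR training loss, then preserving the optimal $O(h^2)$ rate of linear elements requires $\delta_{\mathrm{INR}} \lesssim h^2$, i.e., the training loss must decrease quadratically as the mesh is refined.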