Neural operator learning accelerates the solution of PDEs by approximating operators as mappings between continuous function spaces. Yet in many engineering settings, varying geometry induces discrete structural changes, including topological changes, abrupt changes in boundary conditions or boundary types, and changes in the computational domain, which break the smooth-variation premise. Here we introduce Discrete Solution Operator Learning (DiSOL), a complementary paradigm that learns discrete solution procedures rather than continuous function-space operators. DiSOL factorizes the solver into learnable stages that mirror classical discretizations: local contribution encoding, multiscale assembly, and implicit solution reconstruction on an embedded grid, thereby preserving procedure-level consistency while adapting to geometry-dependent discrete structures. Across geometry-dependent Poisson, advection-diffusion, linear elasticity, and spatiotemporal heat conduction problems, DiSOL produces stable and accurate predictions under both in-distribution and strongly out-of-distribution geometries, including discontinuous boundaries and topological changes. These results highlight the need for procedural operator representations in geometry-dominated problems and position discrete solution operator learning as a distinct, complementary direction in scientific machine learning.
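The three-stage factorization above mirrors the structure of a classical discretization pipeline. As a purely illustrative sketch (not the paper's architecture), the stages can be traced on a 1D Poisson problem, with the learned encoders replaced by their classical finite-element counterparts; all function names and the single-scale assembly below are assumptions for exposition:

```python
import numpy as np

def encode_local(h):
    # Stage 1: local contribution encoding.
    # In DiSOL this map is learned; here the classical 1D linear-element
    # stiffness block stands in for the learned encoder (an assumption).
    return (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])

def assemble(n_elems, h):
    # Stage 2: multiscale assembly — scatter local blocks into the
    # global system on the embedded grid (only one scale shown here).
    n = n_elems + 1
    A = np.zeros((n, n))
    for e in range(n_elems):
        A[e:e + 2, e:e + 2] += encode_local(h)
    return A

def reconstruct(A, f, u_left, u_right):
    # Stage 3: implicit solution reconstruction — enforce Dirichlet
    # boundary values and solve the assembled system implicitly.
    n = A.shape[0]
    b = f.copy()
    for idx, val in ((0, u_left), (n - 1, u_right)):
        A[idx, :] = 0.0
        A[idx, idx] = 1.0
        b[idx] = val
    return np.linalg.solve(A, b)

n_elems = 8
h = 1.0 / n_elems
A = assemble(n_elems, h)
u = reconstruct(A, np.zeros(n_elems + 1), 0.0, 1.0)
# For -u'' = 0 with u(0) = 0, u(1) = 1 the exact solution is u(x) = x.
print(np.allclose(u, np.linspace(0.0, 1.0, n_elems + 1)))
```

In DiSOL the hand-coded stencil and assembly rule would be replaced by learnable modules, which is what lets the procedure adapt to geometry-dependent discrete structures rather than assuming a smoothly varying operator.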