Deep learning-based methods have shown remarkable effectiveness in solving PDEs, largely because they enable fast simulations once trained. However, despite the availability of high-performance computing infrastructure, many critical applications remain constrained by the substantial computational cost of generating large-scale, high-quality datasets and training models. In this work, inspired by studies on the structure of Green's functions for elliptic PDEs, we introduce Neural-HSS, a parameter-efficient architecture built upon the Hierarchical Semi-Separable (HSS) matrix structure that is provably data-efficient for a broad class of PDEs. We theoretically analyze the proposed architecture, proving that it satisfies exactness properties even in very low-data regimes, and we investigate its connections with other architectural primitives, such as the Fourier neural operator layer and convolutional layers. We experimentally validate the data efficiency of Neural-HSS on the three-dimensional Poisson equation over a grid of two million points, demonstrating that it learns from data generated by elliptic PDEs in the low-data regime while outperforming baseline methods. Finally, we demonstrate its capability to learn from data arising from a broad class of PDEs in diverse domains, including electromagnetism, fluid dynamics, and biology.
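To give intuition for the HSS structure underlying the architecture: an HSS matrix keeps dense diagonal blocks at the leaves of a recursive partition and represents every off-diagonal block in low-rank form, which is exactly the structure exhibited by discretized Green's functions of elliptic PDEs. The following is a minimal sketch (not the paper's implementation): a naive two-sided recursion that applies a matrix through truncated-SVD off-diagonal factors, tested on the Green's function of the 1D Laplacian, whose off-diagonal blocks are exactly rank one. The function name `hss_matvec`, the fixed rank, and the block size are illustrative choices, not quantities from the paper.

```python
import numpy as np

def hss_matvec(A, x, min_block=64, rank=8):
    """Apply A @ x with an HSS-style recursion: dense diagonal
    blocks at the leaves, truncated-SVD low-rank factors for the
    off-diagonal blocks (compression done on the fly for clarity;
    a real HSS code precomputes nested factors)."""
    n = A.shape[0]
    if n <= min_block:
        return A @ x
    m = n // 2
    A11, A12 = A[:m, :m], A[:m, m:]
    A21, A22 = A[m:, :m], A[m:, m:]

    def lowrank_apply(B, v):
        # rank-truncated SVD stand-in for a stored low-rank factor
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        k = min(rank, len(s))
        return U[:, :k] @ (s[:k] * (Vt[:k] @ v))

    y1 = hss_matvec(A11, x[:m], min_block, rank) + lowrank_apply(A12, x[m:])
    y2 = lowrank_apply(A21, x[:m]) + hss_matvec(A22, x[m:], min_block, rank)
    return np.concatenate([y1, y2])

# Green's function of -u'' = f on [0, 1] with zero boundary values:
# G(s, t) = min(s, t) * (1 - max(s, t)); off-diagonal blocks are rank 1.
n = 512
t = np.linspace(0, 1, n)
G = np.minimum.outer(t, t) * (1 - np.maximum.outer(t, t))

x = np.random.default_rng(0).standard_normal(n)
err = np.linalg.norm(hss_matvec(G, x) - G @ x) / np.linalg.norm(G @ x)
```

Because the off-diagonal blocks of this Green's function are exactly rank one, the rank-8 truncation reproduces the dense matvec to machine precision; for general elliptic kernels the off-diagonal ranks grow only mildly, which is the property the Neural-HSS parameterization exploits.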