Physics-informed neural networks (PINNs) solve partial differential equations (PDEs) by training neural networks to minimize the PDE residual. Because this method approximates an infinite-dimensional PDE solution using a finite set of collocation points, selecting suitable points to minimize the discretization error is essential for accelerating learning. Inspired by number-theoretic methods for numerical analysis, we introduce good lattice training (GLT), together with periodization tricks that ensure the conditions required by the theory. Our experiments demonstrate that GLT requires two to seven times fewer collocation points than typical sampling methods, reducing computational cost while achieving competitive accuracy.
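The good lattice points drawn on here come from number-theoretic quadrature: a rank-1 lattice places points at integer multiples of a generating vector, taken modulo 1. A minimal sketch follows; the generating vector `(1, 89)` with `n = 144` is the standard Fibonacci choice for two dimensions, used here for illustration and not necessarily the configuration used in the experiments.

```python
import numpy as np

def rank1_lattice(n, z):
    """Generate n rank-1 lattice points in [0, 1)^d.

    Point i is (i * z / n) mod 1, where z is the generating vector.
    With a well-chosen z, these are "good lattice points" that cover
    the unit cube more uniformly than i.i.d. random sampling.
    """
    i = np.arange(n)[:, None]          # shape (n, 1)
    z = np.asarray(z)[None, :]         # shape (1, d)
    return (i * z / n) % 1.0           # shape (n, d)

# 2D Fibonacci lattice: n = F_12 = 144, z = (1, F_11) = (1, 89)
pts = rank1_lattice(144, [1, 89])
```

These points would then serve as the collocation points at which the PDE residual loss is evaluated, in place of uniform random or grid sampling.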