This study analyzes the derivative-free loss method for solving a class of elliptic PDEs and fluid problems using neural networks. The approach leverages the Feynman-Kac formulation, representing the solution through stochastic walkers and their averaged values. We investigate how the time interval associated with the Feynman-Kac representation and the walker size (the number of stochastic walkers) influence computational efficiency, trainability, and sampling error. Our analysis shows that the bias of the training loss scales proportionally with the time interval and the spatial gradient of the neural network, and inversely with the walker size. Moreover, we demonstrate that the time interval must be sufficiently long to enable effective training. These results indicate that the walker size can be chosen as small as possible, provided it satisfies the optimal lower bound determined by the time interval. Finally, we present numerical experiments that support our theoretical findings.
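To make the loss construction concrete, the following is a minimal sketch of a derivative-free, Feynman-Kac-style training loop for the model problem Δu = f on the unit square. It is written under stated assumptions and is not the paper's implementation: the walkers take a single Euler-Maruyama step of length dt, the boundary loss is omitted, and all names (u_theta, dt, n_walkers, n_points, f) are illustrative.

```python
import torch

# Assumption: Poisson problem  Delta u = f  with standard Brownian walkers,
# whose generator is (1/2)Delta, so Ito's formula gives
#   u(x) = E[u(x + B_dt)] - (dt/2) f(x) + O(dt^2),
# which is the derivative-free target used below.

torch.manual_seed(0)

u_theta = torch.nn.Sequential(          # neural-network approximation of u
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def f(x):                               # illustrative source term of  Delta u = f
    return -2.0 * torch.pi**2 * torch.sin(torch.pi * x[:, :1]) * torch.sin(torch.pi * x[:, 1:])

dt, n_walkers, n_points = 1e-3, 16, 256  # time interval, walker size, batch size
opt = torch.optim.Adam(u_theta.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.rand(n_points, 2)                    # interior collocation points
    xi = torch.randn(n_points, n_walkers, 2)       # Brownian increments
    x_next = x[:, None, :] + dt**0.5 * xi          # one step of each walker
    with torch.no_grad():                          # bootstrapped target: no gradient through it
        u_avg = u_theta(x_next.reshape(-1, 2)).reshape(n_points, n_walkers).mean(1, keepdim=True)
        target = u_avg - 0.5 * dt * f(x)           # walker average minus source correction
    loss = ((u_theta(x) - target) ** 2).mean()     # derivative-free interior loss
    # (a boundary loss term, omitted here, is needed to pin down the solution)
    opt.zero_grad(); loss.backward(); opt.step()
```

The sketch exposes the two quantities the abstract analyzes: dt sets the time interval of the Feynman-Kac representation, and n_walkers sets the walker size over which the target is averaged; shrinking n_walkers cheapens each step but, per the analysis, increases the sampling-induced bias unless dt is large enough.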