We solve high-dimensional steady-state Fokker-Planck equations on the whole space by applying tensor neural networks. Each tensor network is either a tensor product of one-dimensional feedforward networks or a linear combination of selected radial basis functions. Tensor feedforward networks allow us to exploit auto-differentiation in major Python packages efficiently, while radial basis functions allow us to avoid auto-differentiation altogether, which is rather expensive in high dimensions. We then use physics-informed neural network methods and stochastic gradient descent to train the tensor networks. One essential step is to determine a proper truncated bounded domain, or numerical support, for the Fokker-Planck equation. To train the tensor radial basis function networks more effectively, we impose constraints on the parameters, which leads to relatively high accuracy. We demonstrate numerically that tensor neural networks in physics-informed machine learning are efficient for steady-state Fokker-Planck equations in two to ten dimensions.
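To make the tensor-product construction concrete, the following is a minimal sketch (not the authors' code) of a rank-R tensor feedforward network in PyTorch, approximating p(x_1,...,x_d) by a sum over R rank terms, each a product of one-dimensional feedforward network outputs. The class name `TensorFeedforwardNet` and the hyperparameters `rank` and `width` are illustrative assumptions; the PINN residual for the steady-state Fokker-Planck operator would then be built on top of the autodiff derivatives shown at the end.

```python
# Minimal sketch, assuming a rank-R decomposition:
#   p(x) ~ sum_{r=1}^{R} prod_{i=1}^{d} phi_{r,i}(x_i),
# where each one-dimensional factor phi_{.,i} is a small feedforward network of x_i.
import torch
import torch.nn as nn

class TensorFeedforwardNet(nn.Module):
    def __init__(self, dim, rank=10, width=32):
        super().__init__()
        self.rank = rank
        # One small 1-D network per coordinate; each outputs `rank` values.
        self.factors = nn.ModuleList([
            nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                          nn.Linear(width, rank))
            for _ in range(dim)
        ])

    def forward(self, x):                     # x: (batch, dim)
        prod = torch.ones(x.shape[0], self.rank, device=x.device)
        for i, net in enumerate(self.factors):
            prod = prod * net(x[:, i:i+1])    # elementwise product over coordinates
        return prod.sum(dim=1, keepdim=True)  # sum over the R rank terms

# Usage: evaluate on random 6-D points; auto-differentiation supplies the
# derivatives needed for a physics-informed residual loss.
model = TensorFeedforwardNet(dim=6)
x = torch.rand(128, 6, requires_grad=True)
p = model(x)
grad_p = torch.autograd.grad(p.sum(), x, create_graph=True)[0]
```

Because each factor network depends on a single coordinate, the derivatives entering the Fokker-Planck residual factorize along dimensions, which is what makes auto-differentiation affordable here compared with a generic d-dimensional network.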