We solve high-dimensional steady-state Fokker-Planck equations on the whole space using tensor neural networks. The tensor networks are either a linear combination of tensor products of one-dimensional feedforward networks or a linear combination of several selected radial basis functions. Tensor feedforward networks allow us to efficiently exploit auto-differentiation (in the physical variables) in major Python packages, while radial basis functions avoid auto-differentiation entirely, which is rather expensive in high dimensions. We then train the tensor networks with physics-informed neural network and stochastic gradient descent methods. One essential step is to determine a proper bounded domain, or numerical support, for the Fokker-Planck equation. To better train the tensor radial basis function networks, we impose constraints on the parameters, which leads to relatively high accuracy. We demonstrate numerically that tensor neural networks in physics-informed machine learning are efficient for steady-state Fokker-Planck equations in dimensions two through ten.
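To make the network structure concrete, the following is a minimal illustrative sketch (not the paper's implementation) of the tensor radial basis function variant: a rank-R linear combination of tensor products of one-dimensional Gaussian basis functions, p(x) = Σ_r c_r Π_d exp(−(x_d − μ_{r,d})² / (2 s_{r,d}²)). The names `rank`, `mu`, `s`, and `c` are our own; because each factor is one-dimensional, coordinate derivatives are available in closed form, so no auto-differentiation is needed.

```python
import numpy as np

def tensor_rbf(x, mu, s, c):
    """Evaluate a tensor RBF network at points x of shape (N, D).

    mu, s : (R, D) arrays of 1-D Gaussian centers and (positive) widths
    c     : (R,) array of linear-combination coefficients
    """
    # (N, R, D): scaled squared distance of each point to each 1-D center
    z = (x[:, None, :] - mu[None, :, :]) ** 2 / (2.0 * s[None, :, :] ** 2)
    # product of the 1-D factors over dimensions, then sum over ranks
    return (np.exp(-z).prod(axis=2) * c[None, :]).sum(axis=1)

rng = np.random.default_rng(0)
D, R, N = 10, 5, 4                # a 10-dimensional, rank-5 network
mu = rng.normal(size=(R, D))
s = np.full((R, D), 1.0)          # keeping widths positive is one example of
                                  # the parameter constraints mentioned above
c = rng.normal(size=R)
x = rng.normal(size=(N, D))
vals = tensor_rbf(x, mu, s, c)
print(vals.shape)  # (4,)
```

The tensor feedforward variant has the same outer structure, with each 1-D Gaussian factor replaced by a small one-dimensional feedforward network.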