We present a simple algorithm to approximate the viscosity solution of Hamilton-Jacobi (HJ) equations by means of a deep artificial neural network. The algorithm uses a stochastic gradient descent-based method to minimize the least-squares principle defined by a monotone, consistent numerical scheme. We analyze the critical points of the least-squares principle and derive conditions guaranteeing that any critical point approximates the sought viscosity solution. Combining a deep artificial neural network with a finite difference scheme lifts the fixed-grid restriction of conventional finite difference methods. This feature makes it possible to solve HJ equations posed in high dimensions, where conventional methods are infeasible. We demonstrate the efficacy of our algorithm through numerical studies on various canonical HJ equations across different dimensions, showcasing its potential and versatility.
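As a rough illustration of the least-squares principle described above, the following is a minimal sketch, not the paper's actual method: a tiny one-hidden-layer network is trained to minimize the squared residual of a monotone Godunov upwind scheme for the 1D eikonal equation |u'(x)| = 1 on (0, 1) with u(0) = u(1) = 0, whose viscosity solution is the tent function min(x, 1 - x). All names, network sizes, and constants here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): minimize the
# least-squares residual of a monotone scheme over a small MLP u_theta.
rng = np.random.default_rng(0)
H = 0.05                       # stencil width of the difference scheme
N_HID = 16                     # hidden units (arbitrary choice)
LR, STEPS, BATCH = 0.01, 3000, 128

# One-hidden-layer tanh network, parameters [w1, b1, w2, b2].
params = [rng.normal(0, 1.0, N_HID), rng.normal(0, 1.0, N_HID),
          rng.normal(0, 0.1, N_HID), np.zeros(1)]

def forward(x, params):
    """Return u(x) and per-sample gradients w.r.t. each parameter block."""
    w1, b1, w2, b2 = params
    z = np.tanh(np.outer(x, w1) + b1)          # (N, N_HID)
    val = z @ w2 + b2[0]
    dz = 1.0 - z ** 2
    grads = [dz * w2 * x[:, None],             # d u / d w1
             dz * w2,                          # d u / d b1
             z,                                # d u / d w2
             np.ones((len(x), 1))]             # d u / d b2
    return val, grads

def loss_and_grad(params):
    x = rng.uniform(H, 1.0 - H, BATCH)         # random interior sample points
    uc, gc = forward(x, params)
    um, gm = forward(x - H, params)
    up, gp = forward(x + H, params)
    a, b = (uc - um) / H, (uc - up) / H        # upwind difference quotients
    m = np.maximum(np.maximum(a, b), 0.0)      # Godunov numerical Hamiltonian
    r = m - 1.0                                # scheme residual for |u'| = 1
    # Subgradient coefficients of m w.r.t. (u(x-H), u(x), u(x+H)).
    cc = np.where(m > 0, 1.0 / H, 0.0)
    cm = np.where((m > 0) & (a >= b), -1.0 / H, 0.0)
    cp = np.where((m > 0) & (b > a), -1.0 / H, 0.0)
    xb = np.array([0.0, 1.0])                  # Dirichlet boundary points
    ub, gb = forward(xb, params)
    loss = np.mean(r ** 2) + np.sum(ub ** 2)   # scheme residual + boundary penalty
    grad = []
    for k in range(4):
        gi = 2 * r[:, None] * (cm[:, None] * gm[k] + cc[:, None] * gc[k]
                               + cp[:, None] * gp[k])
        grad.append(np.mean(gi, axis=0) + 2 * gb[k].T @ ub)
    return loss, grad

# Adam-style stochastic gradient descent on the least-squares principle.
mom = [np.zeros_like(p) for p in params]
vel = [np.zeros_like(p) for p in params]
loss_history = []
for t in range(1, STEPS + 1):
    loss, grad = loss_and_grad(params)
    loss_history.append(loss)
    for k in range(4):
        mom[k] = 0.9 * mom[k] + 0.1 * grad[k]
        vel[k] = 0.999 * vel[k] + 0.001 * grad[k] ** 2
        mhat = mom[k] / (1 - 0.9 ** t)
        vhat = vel[k] / (1 - 0.999 ** t)
        params[k] -= LR * mhat / (np.sqrt(vhat) + 1e-8)

xs = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(forward(xs, params)[0] - np.minimum(xs, 1.0 - xs)))
print(f"final loss {loss_history[-1]:.4f}, max error {err:.3f}")
```

Note that because the sample points are drawn at random rather than fixed to a grid, the same structure extends directly to higher dimensions, which is the feature the abstract emphasizes; the monotone scheme and its sign conventions above are a standard choice for the eikonal Hamiltonian, not necessarily the one used in the paper.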