Approximately solving high-dimensional partial differential equations (PDEs) is one of the most challenging problems in applied mathematics. Most numerical approximation methods for PDEs in the scientific literature suffer from the so-called curse of dimensionality in the sense that the number of computational operations required by the approximation scheme to achieve an approximation accuracy $\varepsilon>0$ grows exponentially in the PDE dimension and/or the reciprocal of $\varepsilon$. Recently, certain deep learning based approximation methods for PDEs have been proposed, and numerical simulations for such methods suggest that deep neural network (DNN) approximations might indeed overcome the curse of dimensionality in the sense that the number of real parameters used to describe the approximating DNNs grows at most polynomially in both the PDE dimension $d\in\mathbb{N}$ and the reciprocal of the prescribed accuracy $\varepsilon>0$. A few rigorous results in the scientific literature now substantiate this conjecture by proving that DNNs overcome the curse of dimensionality in approximating solutions of PDEs. Each of these results establishes that DNNs overcome the curse of dimensionality in approximating suitable PDE solutions at a fixed time point $T>0$ and on a compact cube $[a,b]^d$ in space, but none of them answers the question of whether the entire PDE solution on $[0,T]\times [a,b]^d$ can be approximated by DNNs without the curse of dimensionality. Overcoming this limitation is precisely the subject of this article. More specifically, the main result of this work proves, for every $a\in\mathbb{R}$ and $b\in (a,\infty)$, that solutions of certain Kolmogorov PDEs can be approximated by DNNs on the space-time region $[0,T]\times [a,b]^d$ without the curse of dimensionality.
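For orientation, Kolmogorov PDEs of the kind referred to above are typically written in the following form; this is only a sketch of the generic setting, and the precise regularity and growth assumptions on the coefficient functions $\mu$ and $\sigma$ are those stated in the main result:

```latex
% Generic Kolmogorov PDE with drift \mu and diffusion \sigma
% (illustrative form only; the article's exact hypotheses may differ):
\begin{align*}
  \bigl(\tfrac{\partial}{\partial t} u\bigr)(t,x)
    &= \tfrac{1}{2}\operatorname{Trace}\!\bigl(\sigma(x)[\sigma(x)]^{*}
       (\operatorname{Hess}_x u)(t,x)\bigr)
       + \bigl\langle \mu(x), (\nabla_x u)(t,x) \bigr\rangle,\\
  u(0,x) &= \varphi(x)
\end{align*}
```

for $(t,x)\in(0,T]\times\mathbb{R}^d$, where $\varphi\colon\mathbb{R}^d\to\mathbb{R}$ is the initial value, $\mu\colon\mathbb{R}^d\to\mathbb{R}^d$ is the drift coefficient, and $\sigma\colon\mathbb{R}^d\to\mathbb{R}^{d\times d}$ is the diffusion coefficient. The approximation statement then concerns the solution $u$ on all of $[0,T]\times[a,b]^d$ rather than only the time slice $u(T,\cdot)$.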