In this work, we address a foundational question in the theoretical analysis of the Deep Ritz Method (DRM) in the over-parametrization regime: given a target accuracy, how should one choose the number of training samples, the key architectural parameters of the neural network, the step size of the projected gradient descent procedure, and the number of iterations, so that the output of gradient descent approximates the true solution of the underlying partial differential equation to the specified accuracy?
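To make the objects in this question concrete, the following is a minimal, self-contained sketch of the Deep Ritz pipeline on a toy one-dimensional problem: minimize the Ritz energy of $-u'' = f$ on $(0,1)$ with zero boundary conditions, using a small one-hidden-layer network, Monte Carlo sampling of the energy integral, and projected gradient descent onto an $\ell_2$ ball. All specific choices below (the width `M`, sample size `N_SAMPLES`, step size `STEP`, ball radius `RADIUS`, iteration count `ITERS`, and the finite-difference gradients) are illustrative assumptions, not the quantities prescribed by the analysis; they stand in for exactly the hyperparameters the question asks how to determine.

```python
import math, random

# Toy problem: -u'' = f on (0,1), u(0) = u(1) = 0, with f(x) = pi^2 sin(pi x).
# The Ritz energy E(u) = \int_0^1 ( 0.5 * u'(x)^2 - f(x) * u(x) ) dx
# is minimized by the true solution u*(x) = sin(pi x).
# All constants below are illustrative assumptions, not prescribed values.

M = 8            # hidden width (stand-in for the architectural parameter)
N_SAMPLES = 64   # Monte Carlo sample size per iteration
STEP = 0.05      # gradient-descent step size
RADIUS = 10.0    # radius of the l2 ball we project onto
ITERS = 150      # number of projected-gradient iterations
EPS = 1e-4       # finite-difference half-width

def f(x):
    return math.pi ** 2 * math.sin(math.pi * x)

def net(theta, x):
    # One-hidden-layer tanh network; the multiplier x*(1-x) enforces
    # the boundary conditions u(0) = u(1) = 0 exactly.
    s = 0.0
    for i in range(M):
        w, b, c = theta[3 * i], theta[3 * i + 1], theta[3 * i + 2]
        s += c * math.tanh(w * x + b)
    return x * (1.0 - x) * s

def energy(theta, xs):
    # Monte Carlo estimate of the Ritz energy; u' by central differences.
    total = 0.0
    for x in xs:
        du = (net(theta, x + EPS) - net(theta, x - EPS)) / (2 * EPS)
        total += 0.5 * du * du - f(x) * net(theta, x)
    return total / len(xs)

def project(theta, radius):
    # Euclidean projection onto the l2 ball of the given radius.
    norm = math.sqrt(sum(t * t for t in theta))
    if norm <= radius:
        return theta
    return [t * radius / norm for t in theta]

random.seed(0)
theta = [random.uniform(-0.5, 0.5) for _ in range(3 * M)]
initial = energy(theta, [random.random() for _ in range(N_SAMPLES)])

for _ in range(ITERS):
    xs = [random.random() for _ in range(N_SAMPLES)]  # fresh samples
    grad = []
    for j in range(len(theta)):  # finite-difference gradient (toy scale only)
        theta[j] += EPS
        up = energy(theta, xs)
        theta[j] -= 2 * EPS
        down = energy(theta, xs)
        theta[j] += EPS
        grad.append((up - down) / (2 * EPS))
    theta = project([t - STEP * g for t, g in zip(theta, grad)], RADIUS)

final = energy(theta, [k / 200.0 for k in range(1, 200)])
print(f"energy: {initial:.3f} -> {final:.3f} (exact minimum is -pi^2/4 ~ -2.467)")
```

The theoretical question above then asks how large `N_SAMPLES`, `M`, `ITERS` must be and how small `STEP` must be, as functions of the target precision, for the final iterate to approximate the true solution to that precision.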