Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to data size. In this work, we analyze a new data-driven regularized stochastic gradient descent method for the efficient numerical solution of a class of nonlinear ill-posed inverse problems in infinite-dimensional Hilbert spaces. At each iteration, the method randomly selects one equation from the nonlinear system, combines it with the corresponding equation from a system learned from training data to obtain a stochastic estimate of the gradient, and then performs a descent step along the estimated gradient. We prove the regularizing property of the method under the tangential cone condition and an a priori parameter choice, and then derive convergence rates under additional source and range invariance conditions. Several numerical experiments are provided to complement the analysis.
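To make the iteration concrete, the following is a minimal, finite-dimensional sketch of the update described above. It assumes a plain SGD step that mixes the residual of one randomly chosen nonlinear equation with that of the corresponding learned equation; the forward map `exp(a_i . x)`, the learned operators `B_i`, the coupling weight `lam`, and the polynomially decaying step sizes are all illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical problem setup (illustrative only) ----------------------
# Nonlinear system F_i(x) = y_i, i = 1..n, with F_i(x) = exp(a_i . x):
# a finite-dimensional stand-in for the Hilbert-space setting of the paper.
d, n = 20, 200
A_fwd = rng.standard_normal((n, d)) / np.sqrt(d)
x_true = rng.standard_normal(d)
delta = 1e-3
y_delta = np.exp(A_fwd @ x_true) + delta * rng.standard_normal(n)  # noisy y_i

def F(i, x):
    """i-th nonlinear equation."""
    return np.exp(A_fwd[i] @ x)

def F_grad(i, x):
    """Adjoint of the derivative of F_i applied to a scalar residual."""
    return np.exp(A_fwd[i] @ x) * A_fwd[i]

# "Learned" linear system B_i x = b_i; here it is simply fabricated as a
# perturbation of the true operators, whereas in the paper it would be
# learned from training data.
B = A_fwd + 0.05 * rng.standard_normal((n, d))  # hypothetical learned operators
b = B @ x_true + 0.01 * rng.standard_normal(n)

def ddr_sgd(x0, num_iter, eta0=0.1, alpha=0.5, lam=0.1):
    """Data-driven regularized SGD: at step k, draw one index i_k and
    descend along a stochastic gradient mixing the residual of the i_k-th
    nonlinear equation with that of the corresponding learned equation."""
    x = x0.copy()
    for k in range(num_iter):
        i = rng.integers(n)
        g = F_grad(i, x) * (F(i, x) - y_delta[i])  # model-based term
        g += lam * B[i] * (B[i] @ x - b[i])        # learned, data-driven term
        eta = eta0 / (k + 1) ** alpha              # polynomially decaying steps
        x -= eta * g
    return x

# A priori stopping: the iteration count acts as the regularization
# parameter and is fixed in advance (here chosen heuristically).
x_rec = ddr_sgd(np.zeros(d), num_iter=5000)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

In this reading, early stopping via the a priori choice of the iteration count plays the role of the regularization parameter, which is why the decay of the step sizes and the stopping index matter for the regularizing property established in the analysis.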