Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to data size. In this work, we analyze a new data-driven regularized stochastic gradient descent method for the efficient numerical solution of a class of nonlinear ill-posed inverse problems in infinite-dimensional Hilbert spaces. At each iteration, the method randomly selects one equation from the nonlinear system, combines it with the corresponding equation from the system learned from training data to obtain a stochastic estimate of the gradient, and then performs a descent step with the estimated gradient. We prove the regularizing property of the method under the tangential cone condition and an a priori parameter choice, and then derive convergence rates under additional source and range invariance conditions. Several numerical experiments are provided to complement the analysis.
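To make the iteration concrete, the following is a minimal finite-dimensional Python sketch of one such data-driven regularized SGD loop. All names here (F, dF, G, dG, alpha, eta0) are illustrative assumptions rather than the paper's notation, and the coupling of the physical and learned residuals, as well as the step-size schedule, are only plausible stand-ins for the scheme analyzed in the paper.

```python
import numpy as np

def data_driven_sgd(F, dF, G, dG, y, x0, steps, eta0=1.0, alpha=0.5, rng=None):
    """Sketch of a data-driven regularized SGD iteration (hypothetical
    interface). F[i], dF[i]: the i-th forward map and its Jacobian for
    the nonlinear system F_i(x) = y[i]; G[i], dG[i]: the corresponding
    learned map and Jacobian obtained from training data; alpha weighs
    the learned residual against the physical one (an assumption)."""
    rng = rng or np.random.default_rng()
    n, x = len(F), x0.copy()
    for k in range(steps):
        i = rng.integers(n)                  # randomly select one equation
        r_phys = F[i](x) - y[i]              # residual of the selected physical equation
        r_learn = G[i](x) - y[i]             # residual of the corresponding learned equation
        # stochastic gradient estimate combining both equations
        g = dF[i](x).T @ r_phys + alpha * dG[i](x).T @ r_learn
        eta = eta0 / (k + 1) ** 0.5          # decaying step size (an a priori choice)
        x = x - eta * g
    return x
```

The polynomially decaying step size is one standard a priori parameter choice of the kind referenced above; in practice, the weight between the two residuals and the stopping index would be tied to the noise level and the quality of the learned system.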