In this work, we present a novel variant of the stochastic gradient descent method, termed the iteratively regularized stochastic gradient descent (IRSGD) method, to solve nonlinear ill-posed problems in Hilbert spaces. Under standard assumptions, we show that the mean-square iteration error of the method converges to zero for exact data. In the presence of noisy data, we first propose a heuristic parameter choice rule (HPCR) based on the method suggested by Hanke and Raus, and then apply the IRSGD method in combination with HPCR. Notably, HPCR selects the regularization parameter without requiring any a priori knowledge of the noise level. We show that the method terminates in finitely many steps in the case of noisy data and has regularizing properties. Further, we derive convergence rates of the method under HPCR as well as under the discrepancy principle, using well-known source conditions and related conditions. To the best of our knowledge, this is the first work to establish both the regularization properties and the convergence rates of a stochastic gradient method with a heuristic-type rule in the setting of infinite-dimensional Hilbert spaces. Finally, we present numerical experiments that demonstrate the practical efficacy of the proposed method.
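To make the ingredients of the abstract concrete, the following is a minimal sketch of one common form of an iteratively regularized SGD step combined with a Hanke-Raus-type heuristic stopping functional. The toy finite-dimensional operator `F`, the step-size schedule `t_k`, the regularization schedule `alpha_k`, and the functional Θ(k) = √k · ‖F(x_k) − y^δ‖ are all illustrative assumptions; the paper's exact update rule, parameter schedules, and HPCR functional in the Hilbert-space setting may differ.

```python
import numpy as np

# Illustrative IRSGD-style iteration with a Hanke-Raus-type heuristic rule.
# All schedules and the toy nonlinear map below are assumptions for the sketch,
# not the paper's exact scheme.

rng = np.random.default_rng(0)
n, m = 50, 200                          # unknowns, number of sub-equations F_i(x) = y_i
A = rng.standard_normal((m, n)) / np.sqrt(n)

def F(x):
    """Toy nonlinear forward map, evaluated component-wise: F_i(x) = s_i + 0.1 s_i^2."""
    s = A @ x
    return s + 0.1 * s**2

x_true = rng.standard_normal(n)
y_delta = F(x_true) + 0.01 * rng.standard_normal(m)   # noisy data; noise level unknown to the rule

x = np.zeros(n)                         # x_0 = 0 also serves as the regularization anchor
iterates = [x.copy()]
theta = []                              # Hanke-Raus-type functional values

for k in range(5000):
    i = rng.integers(m)                 # sample one sub-equation uniformly at random
    t_k = 1.0 / np.sqrt(k + 1)          # assumed decaying step size
    alpha_k = 1.0 / (k + 1)             # assumed decaying regularization parameter
    s_i = A[i] @ x
    res_i = s_i + 0.1 * s_i**2 - y_delta[i]          # residual of the sampled equation
    grad_i = (1.0 + 0.2 * s_i) * A[i]                # adjoint of the i-th linearization
    # IRSGD-style update: stochastic gradient step plus iterative regularization term
    x = x - t_k * (res_i * grad_i + alpha_k * x)
    iterates.append(x.copy())
    # Heuristic functional Theta(k) = sqrt(k) * ||F(x_k) - y_delta||; no noise level used
    theta.append(np.sqrt(k + 1) * np.linalg.norm(F(x) - y_delta))

k_star = int(np.argmin(theta))          # heuristic stopping index: minimizer of Theta
x_star = iterates[k_star + 1]
print(f"k* = {k_star}, reconstruction error = {np.linalg.norm(x_star - x_true):.3f}")
```

The key point the sketch conveys is that the stopping index is chosen purely from computable quantities (the residual history), which is what allows HPCR to operate without a priori knowledge of the noise level, in contrast to the discrepancy principle.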