The present article studies the minimization of convex, L-smooth functions defined on a separable real Hilbert space. We analyze regularized stochastic gradient descent (reg-SGD), a variant of stochastic gradient descent that uses Tikhonov regularization with a time-dependent, vanishing regularization parameter. We prove strong convergence of reg-SGD to the minimum-norm solution of the original problem without additional boundedness assumptions. Moreover, we quantify the rate of convergence and optimize the interplay between step-sizes and regularization decay. Our analysis reveals how vanishing Tikhonov regularization controls the flow of SGD and yields stable learning dynamics, offering new insights into the design of iterative algorithms for convex problems, including those that arise in ill-posed inverse problems. We validate our theoretical findings through numerical experiments on image reconstruction and ODE-based inverse problems.
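For orientation, a minimal sketch of a Tikhonov-regularized SGD step consistent with the description above (the symbols $\gamma_k$, $\lambda_k$, and $\nabla f_{\xi_k}$ are illustrative assumptions; the precise scheme and parameter schedules are those specified in the paper):
\[
x_{k+1} \;=\; x_k \;-\; \gamma_k\bigl(\nabla f_{\xi_k}(x_k) + \lambda_k x_k\bigr),
\qquad \gamma_k > 0,\quad \lambda_k \downarrow 0,
\]
where $\nabla f_{\xi_k}(x_k)$ denotes a stochastic gradient of the objective at $x_k$ and $\lambda_k x_k$ is the gradient of the Tikhonov term $\tfrac{\lambda_k}{2}\|x_k\|^2$, whose vanishing weight drives the iterates toward the minimum-norm solution.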