Stochastic variance reduced gradient (SVRG) is an accelerated variant of stochastic gradient descent based on variance reduction, and is promising for solving large-scale inverse problems. In this work, we analyze SVRG and a regularized version that incorporates a priori knowledge of the problem, for solving linear inverse problems in Hilbert spaces. We prove that, under suitable constant step size schedules and regularity conditions, regularized SVRG achieves optimal convergence rates in terms of the noise level without any early stopping rule, and that standard SVRG is also optimal for problems with nonsmooth solutions under an a priori stopping rule. The analysis is based on an explicit error recursion and suitable prior estimates on the inner-loop updates with respect to the anchor point. Numerical experiments are provided to complement the theoretical analysis.
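For orientation, the following is a minimal sketch of plain SVRG applied to a discretized linear inverse problem, i.e. least squares for Ax = y: each outer pass recomputes the full gradient at an anchor point, and the inner loop uses it to de-bias single-row stochastic gradients. The function name svrg_linear, the step size, and the loop lengths are illustrative assumptions, not the schedules or the regularized variant analyzed in this work.

```python
import numpy as np

def svrg_linear(A, y, step=1e-3, n_outer=20, n_inner=None, x0=None, rng=None):
    """Plain SVRG for min_x (1/2n) ||A x - y||^2 (hypothetical sketch;
    parameters are illustrative, not the paper's settings)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    n_inner = n if n_inner is None else n_inner
    x_tilde = np.zeros(d) if x0 is None else x0.copy()

    for _ in range(n_outer):
        # Full gradient at the anchor point (variance-reduction reference).
        full_grad = A.T @ (A @ x_tilde - y) / n
        x = x_tilde.copy()
        for _ in range(n_inner):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient:
            # grad_i(x) - grad_i(x_tilde) + full gradient at the anchor.
            g = a_i * (a_i @ x - y[i]) - a_i * (a_i @ x_tilde - y[i]) + full_grad
            x -= step * g
        # Use the last inner iterate as the new anchor (one common choice).
        x_tilde = x
    return x_tilde
```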