In this paper, we propose a new multilevel stochastic framework for the solution of optimization problems. The proposed approach uses random regularized first-order models that exploit an available hierarchical description of the problem, either in the classical variable space or in the function space, meaning that the objective function is available at different levels of accuracy. A convergence analysis of the method is conducted, and its numerical behavior is tested on finite-sum minimization problems. Indeed, the multilevel framework is tailored to the solution of such problems, resulting in effect in a nontrivial variance reduction technique with adaptive step-size that outperforms standard approaches when solving nonconvex problems. Differently from classical deterministic multilevel methods, our stochastic method does not require the finest approximation to coincide with the original objective function. This makes it possible to avoid evaluating the full sum in finite-sum minimization problems, opening the way to the solution of classification problems with large data sets.
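To make the idea concrete, the following is a minimal sketch, not the authors' actual algorithm: on a toy least-squares finite sum, a cheap coarse subsample proposes a regularized first-order step, a finer (but still partial) subsample decides acceptance, and the regularization parameter adapts the step-size; the full sum is never evaluated inside the loop. All names, batch sizes, and the acceptance rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum problem: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
n, d = 200, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def f(x, idx=None):
    """Objective averaged over a subsample idx (full sum if idx is None)."""
    S = np.arange(n) if idx is None else idx
    r = A[S] @ x - b[S]
    return 0.5 * np.mean(r * r)

def grad(x, idx):
    """Subsampled gradient (the cheap, low-accuracy model)."""
    r = A[idx] @ x - b[idx]
    return A[idx].T @ r / len(idx)

def multilevel_step_method(x0, iters=200, sigma=1.0,
                           batch_coarse=8, batch_fine=64):
    """Hypothetical two-level loop: coarse subsample builds the
    regularized first-order model, finer subsample tests the trial
    point, and sigma (larger sigma = shorter step) adapts on
    acceptance/rejection -- the full sum is never touched."""
    x = x0.copy()
    for _ in range(iters):
        Sc = rng.choice(n, batch_coarse, replace=False)  # coarse level
        Sf = rng.choice(n, batch_fine, replace=False)    # finer level
        trial = x - grad(x, Sc) / sigma                  # regularized step
        if f(trial, Sf) < f(x, Sf):                      # accept on finer model
            x = trial
            sigma = max(sigma / 2.0, 1e-3)               # lengthen next step
        else:
            sigma *= 2.0                                 # shorten next step
    return x

x = multilevel_step_method(np.zeros(d))
```

The accept/reject rule on the finer subsample plays the role of the variance reduction: noisy coarse steps that would increase the (more accurate) model are discarded, while sigma acts as the adaptive step-size.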