We propose a novel algorithm for solving non-convex, nonlinear equality-constrained finite-sum optimization problems. The proposed algorithm incorporates an additional sampling strategy for the sample size update into the well-known framework of quadratic penalty methods. Thus, depending on the problem at hand, the resulting sample size strategy may range from a mini-batch at one end of the spectrum to an increasing sample size that eventually reaches the full sample at the other. A non-monotone line search is used for the step size update, and the penalty parameter is also adaptive. The proposed algorithm avoids costly projections, which, together with the sample size update, may yield significant computational savings. Moreover, the proposed method can be viewed as an extension of the additional sampling approach from unconstrained and linearly constrained problems to the more general class of problems with nonlinear constraints. Almost sure convergence is proved under a standard set of assumptions for this framework, and numerical experiments on both academic and real-data machine learning problems demonstrate the effectiveness of the proposed approach.
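To make the ingredients of the abstract concrete, the following is a minimal sketch of a stochastic quadratic penalty loop with an adaptive mini-batch, a non-monotone backtracking line search, and an adaptive penalty parameter. The toy least-squares objective, the single norm constraint, the batch-growth rule, and all constants are illustrative assumptions, not the paper's actual algorithm or parameter choices.

```python
import numpy as np

# Hypothetical finite-sum problem: min (1/N) sum_i 0.5*||a_i^T x - b_i||^2
# subject to one nonlinear equality constraint c(x) = ||x||^2 - 1 = 0.
rng = np.random.default_rng(0)
N, d = 200, 2
A = rng.standard_normal((N, d))
b = rng.standard_normal(N)

def c(x):
    # nonlinear equality constraint value
    return x @ x - 1.0

def c_grad(x):
    return 2.0 * x

def penalty_val(x, mu, idx):
    # sampled quadratic penalty: f_S(x) + (mu/2) * c(x)^2
    r = A[idx] @ x - b[idx]
    return 0.5 * (r @ r) / len(idx) + 0.5 * mu * c(x) ** 2

def penalty_grad(x, mu, idx):
    r = A[idx] @ x - b[idx]
    return A[idx].T @ r / len(idx) + mu * c(x) * c_grad(x)

x = np.ones(d)
mu, batch = 1.0, 10          # initial penalty parameter and mini-batch size
hist = []                     # recent penalty values for the non-monotone test
for k in range(200):
    idx = rng.choice(N, size=min(batch, N), replace=False)
    g = penalty_grad(x, mu, idx)
    # non-monotone Armijo backtracking: compare against the max of the
    # last few sampled penalty values rather than the current value only
    t = 1.0
    ref = max(hist[-5:], default=penalty_val(x, mu, idx))
    while penalty_val(x - t * g, mu, idx) > ref - 1e-4 * t * (g @ g) and t > 1e-10:
        t *= 0.5
    x = x - t * g
    hist.append(penalty_val(x, mu, idx))
    # adaptive updates (illustrative rules): grow the penalty parameter
    # while the constraint violation is large, and grow the sample size
    if abs(c(x)) > 1e-3:
        mu = min(mu * 1.05, 1e6)
    batch = min(int(batch * 1.02) + 1, N)

print(f"final constraint violation: {abs(c(x)):.4f}, final batch: {batch}")
```

Note that no projection onto the feasible set is ever computed; feasibility is promoted only through the growing penalty term, while the batch-growth rule lets the iteration transition from cheap mini-batch steps to full-sample steps.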