Constrained optimization with multiple functional inequality constraints has significant applications in machine learning. This paper examines a crucial subset of such problems where both the objective and constraint functions are weakly convex. Existing methods often face limitations, including slow convergence rates or reliance on double-loop algorithmic designs. To overcome these challenges, we introduce a novel single-loop penalty-based stochastic algorithm. Following the classical exact penalty method, our approach employs a {\bf hinge-based penalty}, which permits the use of a constant penalty parameter, enabling us to achieve {\bf state-of-the-art complexity} for finding an approximate Karush-Kuhn-Tucker (KKT) solution. We further extend our algorithm to address finite-sum coupled compositional objectives, which are prevalent in artificial intelligence applications, establishing improved complexity over existing approaches. Finally, we validate our method through experiments on fair learning with receiver operating characteristic (ROC) fairness constraints and continual learning with non-forgetting constraints.
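To illustrate the core idea of a hinge-based exact penalty with a constant penalty parameter, here is a minimal sketch on a hypothetical one-dimensional toy problem (this is not the paper's algorithm; the objective, constraint, stepsize schedule, and the choice $\rho = 5$ are all illustrative assumptions):

```python
# Toy sketch (not the paper's method): solve
#     min_x (x - 2)^2   subject to  x - 1 <= 0
# via the exact hinge penalty
#     F(x) = (x - 2)^2 + rho * max(0, x - 1)
# with a CONSTANT penalty parameter rho. When rho exceeds the optimal
# Lagrange multiplier (here 2), the penalized minimizer coincides with
# the constrained optimum x* = 1, which is what makes the penalty "exact".
import math

def subgrad(x, rho):
    g = 2.0 * (x - 2.0)   # gradient of the objective (x - 2)^2
    if x > 1.0:           # subgradient of the hinge term max(0, x - 1)
        g += rho
    return g

x, rho = 0.0, 5.0          # rho = 5 > 2, so the penalty is exact
for t in range(10_000):
    eta = 0.5 / math.sqrt(t + 1)   # diminishing stepsize
    x -= eta * subgrad(x, rho)

print(round(x, 3))         # near the constrained optimum 1.0
```

Because the hinge is nonsmooth yet exact, a fixed $\rho$ suffices; classical smooth quadratic penalties would instead require driving the penalty parameter to infinity to enforce feasibility.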