We study first-order algorithms that are uniformly stable for empirical risk minimization (ERM) problems that are convex and smooth with respect to $p$-norms, $p \geq 1$. We propose a black-box reduction that, by exploiting properties of uniformly convex regularizers, turns an optimization algorithm for H\"older smooth convex losses into a uniformly stable learning algorithm with optimal excess risk bounds, up to a constant factor depending on $p$. Obtaining such a black-box reduction for uniform stability was posed as an open question by Attia and Koren (2022), who resolved the Euclidean case $p=2$. We explore applications that leverage non-Euclidean geometry in binary classification problems.
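As background for the notion central to the reduction (stated here in its standard textbook form; the exact modulus and norm used in the paper may differ), a differentiable function $\psi$ is $q$-uniformly convex with parameter $\mu > 0$ with respect to a norm $\|\cdot\|$ if
\[
\psi(y) \;\geq\; \psi(x) + \langle \nabla \psi(x),\, y - x \rangle + \frac{\mu}{q}\, \|y - x\|^{q}
\qquad \text{for all } x, y .
\]
For $q = 2$ this recovers the usual definition of $\mu$-strong convexity with respect to $\|\cdot\|$.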