We consider the penalized distributionally robust optimization (DRO) problem with a closed, convex uncertainty set, a setting that encompasses $f$-DRO and spectral/$L$-risk minimization. We present Drago, a stochastic primal-dual algorithm that combines cyclic and randomized components with a carefully regularized primal update to achieve dual variance reduction. Owing to this design, Drago enjoys a state-of-the-art linear convergence rate on strongly convex-strongly concave DRO problems, with a fine-grained dependence on the primal and dual condition numbers. The theoretical results are supported by numerical benchmarks on regression and classification tasks.
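For concreteness, the penalized DRO problem referenced above can be sketched as a min-max objective of the following standard form (a sketch under assumed notation, not the paper's own statement: $\ell_i$ is the loss on example $i$, $\mathcal{Q} \subseteq \Delta_{n-1}$ the closed convex uncertainty set, $D$ a divergence penalty with weight $\nu > 0$, and $\mu > 0$ the primal regularization strength):

```latex
\min_{w \in \mathbb{R}^d} \; \max_{q \in \mathcal{Q}} \;
  \sum_{i=1}^{n} q_i \, \ell_i(w)
  \;-\; \nu \, D\!\left(q \,\middle\|\, \tfrac{1}{n}\mathbf{1}_n\right)
  \;+\; \frac{\mu}{2}\,\lVert w \rVert_2^2
```

Under this form, the $\nu$-weighted penalty makes the maximization over $q$ strongly concave and the $\mu$-term makes the minimization over $w$ strongly convex, which is the strongly convex-strongly concave regime in which the linear convergence rate is stated.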