This paper studies Distributionally Robust Optimization (DRO), a fundamental framework for enhancing the robustness and generalization of statistical learning and optimization. An effective ambiguity set for DRO must contain distributions that remain consistent with the nominal distribution while being diverse enough to account for a variety of potential scenarios. Moreover, it should lead to tractable DRO solutions. To this end, we propose generative model-based ambiguity sets that capture various adversarial distributions beyond the nominal support space while maintaining consistency with the nominal distribution. Building on this generative ambiguity modeling, we propose DRO with Generative Ambiguity Set (GAS-DRO), a tractable DRO algorithm that solves the inner maximization over the parameterized generative model space. We formally establish the convergence of GAS-DRO to stationary points. We implement GAS-DRO with a diffusion model and empirically demonstrate its superior Out-of-Distribution (OOD) generalization performance on ML tasks.