The Robust Satisficing (RS) model is an emerging approach to robust optimization, offering streamlined procedures and robust generalization across various applications. However, the statistical theory of RS remains unexplored in the literature. This paper fills this gap by comprehensively analyzing the theoretical properties of the RS model. Notably, the RS structure offers a more straightforward path to deriving statistical guarantees than the seminal Distributionally Robust Optimization (DRO) framework, yielding a richer set of results. In particular, we establish two-sided confidence intervals for the optimal loss without needing to solve a minimax optimization problem explicitly. We further provide finite-sample generalization error bounds for the RS optimizer. Importantly, our results extend to scenarios involving distribution shifts, where discrepancies exist between the sampling and target distributions. Our numerical experiments show that the RS model consistently outperforms the baseline empirical risk minimization in small-sample regimes and under distribution shifts. Furthermore, compared to the DRO model, the RS model exhibits lower sensitivity to hyperparameter tuning, highlighting its practicality for robustness considerations.