Machine learning models are often required to perform well across several pre-defined settings, such as a set of user groups. Worst-case performance is a common metric for this requirement, and it is the objective of group distributionally robust optimization (group DRO). Unfortunately, existing group DRO methods struggle when the loss is non-convex in the parameters or the model class is non-parametric. Here, we make a classical move to address this: we reparameterize group DRO from parameter space to function space, which yields several advantages. First, we show that group DRO over the space of bounded functions admits a minimax theorem. Second, for the cross-entropy and mean squared error losses, we show that the minimax optimal mixture distribution is the solution of a simple convex optimization problem. Thus, provided one is working with a model class of universal function approximators, group DRO reduces to a convex optimization problem followed by a classical risk minimization problem. We call our method MixMax. In our experiments, MixMax matched or outperformed the standard group DRO baselines; in particular, it improved the performance of XGBoost over the only applicable baseline, data balancing, on variations of the ACSIncome and CelebA annotations datasets.
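A minimal sketch of the two-stage pipeline described above, under two stated assumptions: groups are represented by toy discrete joint distributions over (x, y), and for cross-entropy the convex problem is taken to be maximizing the conditional label entropy of the mixture (the loss the mixture's Bayes-optimal predictor incurs, which is concave in the mixture weights). The setup, helper names, and optimizer choice here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical toy instantiation of a MixMax-style pipeline:
# stage 1 solves a convex problem over the simplex for mixture weights,
# stage 2 runs ordinary risk minimization on data drawn from that mixture.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# p[g, x, y]: joint distribution over (x, y) for each group g (toy data).
n_groups, n_x, n_y = 3, 5, 2
p = rng.dirichlet(np.ones(n_x * n_y), size=n_groups).reshape(n_groups, n_x, n_y)

def mixture_conditional_entropy(lam):
    """E_x[H(q_lam(y|x))] for the mixture q_lam = sum_g lam_g * p_g."""
    q = np.tensordot(lam, p, axes=1)           # mixture joint, shape (n_x, n_y)
    q_x = q.sum(axis=1, keepdims=True)         # mixture marginal over x
    cond = q / np.clip(q_x, 1e-12, None)       # q_lam(y | x)
    h = -(cond * np.log(np.clip(cond, 1e-12, None))).sum(axis=1)
    return float((q_x.ravel() * h).sum())

# Stage 1: maximize the (concave) objective over the simplex. We use a
# softmax parameterization so an off-the-shelf unconstrained optimizer
# applies; a dedicated convex solver would be the more faithful choice.
def neg_objective(theta):
    lam = np.exp(theta - theta.max())
    lam /= lam.sum()
    return -mixture_conditional_entropy(lam)

res = minimize(neg_objective, np.zeros(n_groups), method="Nelder-Mead")
lam_star = np.exp(res.x - res.x.max())
lam_star /= lam_star.sum()
print("MixMax mixture weights:", np.round(lam_star, 3))

# Stage 2: classical risk minimization on data from the lam_star mixture:
# sample a group with probability lam_star[g], then (x, y) within that group.
n_samples = 10_000
gs = rng.choice(n_groups, size=n_samples, p=lam_star)
flat = p.reshape(n_groups, -1)
xy = np.array([rng.choice(n_x * n_y, p=flat[g]) for g in gs])
X, y = xy // n_y, xy % n_y
# ...then fit any universal function approximator (e.g. XGBoost) on (X, y)
# by standard empirical risk minimization.
```

In the parametric setting this replaces per-step worst-group reweighting with a single upfront convex solve for the mixture, after which any standard training procedure, including non-differentiable learners such as XGBoost, can be used unchanged.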