Training machine learning and statistical models often involves optimizing a data-driven risk criterion. The risk is usually computed with respect to the empirical data distribution, but this may result in poor and unstable out-of-sample performance due to distributional uncertainty. In the spirit of distributionally robust optimization, we propose a novel robust criterion by combining insights from Bayesian nonparametric (i.e., Dirichlet process) theory and a recent decision-theoretic model of smooth ambiguity-averse preferences. First, we highlight novel connections with standard regularized empirical risk minimization techniques, among which are Ridge and LASSO regressions. Then, we establish favorable finite-sample and asymptotic statistical guarantees on the performance of the robust optimization procedure. For practical implementation, we propose and study tractable approximations of the criterion based on well-known Dirichlet process representations. We also show that the smoothness of the criterion naturally lends itself to standard gradient-based numerical optimization. Finally, we provide insights into the workings of our method by applying it to a variety of tasks based on simulated and real datasets.
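As a minimal, self-contained sketch of how a criterion of this kind might be approximated and optimized (not the paper's exact construction): the Dirichlet process posterior over data distributions is replaced by Bayesian-bootstrap draws of Dirichlet(1, ..., 1) weights on the observations, the loss is squared error, and the smooth ambiguity function is taken to be φ(t) = exp(λt), reported on a certainty-equivalent scale for numerical stability. The ambiguity parameter λ, the number of draws M, and the gradient-descent step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy regression data ---
n, d = 60, 3
X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(scale=0.5, size=n)

# --- Dirichlet-process-style approximation via Bayesian-bootstrap draws ---
# Each row of W is a random reweighting of the observations, w ~ Dirichlet(1, ..., 1),
# standing in for a posterior draw of the unknown data-generating distribution.
M = 200
W = rng.dirichlet(np.ones(n), size=M)            # shape (M, n)

lam = 1.0                                        # ambiguity-aversion parameter (illustrative)

def weighted_risks(theta):
    """Per-draw weighted risks r_m(theta) = sum_i w_mi * (y_i - x_i' theta)^2."""
    resid = y - X @ theta
    return W @ (resid ** 2)                      # shape (M,)

def criterion(theta):
    """Smooth ambiguity-averse criterion with phi(t) = exp(lam * t),
    reported on the certainty-equivalent scale (1/lam) * log mean exp(lam * r_m)."""
    r = weighted_risks(theta)
    return (np.log(np.mean(np.exp(lam * (r - r.max())))) + lam * r.max()) / lam

def grad_criterion(theta):
    """Gradient of the criterion: a softmax-weighted average of per-draw risk gradients."""
    resid = y - X @ theta
    r = W @ (resid ** 2)
    s = np.exp(lam * (r - r.max()))
    s /= s.sum()                                 # softmax weights over the M draws
    grad_r = -2.0 * (W * resid) @ X              # (M, d): gradient of each r_m
    return s @ grad_r

# --- Plain gradient descent, exploiting the smoothness of the criterion in theta ---
theta = np.zeros(d)
for _ in range(1000):
    theta -= 0.05 * grad_criterion(theta)

print("estimate:", np.round(theta, 3))
```

Because the criterion is a smooth average of smooth per-draw risks, any off-the-shelf first-order optimizer can replace the plain gradient-descent loop above.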