Performative prediction aims to model scenarios where predictive outcomes subsequently influence the very systems they target. Finding a performative optimum (PO) -- a minimizer of the performative risk -- generally relies on modeling the distribution map, which characterizes how a deployed ML model alters the data distribution. Unfortunately, misspecification of the distribution map is inevitable and can lead to a poor approximation of the true PO. To address this issue, we introduce a novel framework of distributionally robust performative prediction and study a new solution concept termed the distributionally robust performative optimum (DRPO). We establish provable guarantees for the DRPO as a robust approximation to the true PO when the nominal distribution map differs from the actual one. Moreover, distributionally robust performative prediction can be reformulated as an augmented performative prediction problem, enabling efficient optimization. Experimental results demonstrate that the DRPO offers potential advantages over the traditional PO approach when the distribution map is misspecified at either the micro or macro level.
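To make the solution concept concrete, the sketch below instantiates distributionally robust performative prediction in a toy one-dimensional regression. Everything in it is an illustrative assumption rather than the paper's construction: the location-family `nominal_map` with performativity strength `eps` is hypothetical, the ambiguity set is a KL ball evaluated through the standard dual $\sup_{Q:\,\mathrm{KL}(Q\|P)\le\rho}\mathbb{E}_Q[\ell] = \inf_{\lambda>0}\,\lambda\rho + \lambda\log\mathbb{E}_P[e^{\ell/\lambda}]$, and a grid search stands in for the augmented-problem reformulation described above.

```python
import numpy as np
from scipy.special import logsumexp

def nominal_map(theta, n=2000):
    """Hypothetical nominal distribution map: deploying the model theta
    shifts the outcome mean by eps * theta (a location-family response).
    A fixed seed gives common random numbers across candidate thetas."""
    rng = np.random.default_rng(0)
    eps = 0.5                                   # assumed performativity strength
    x = rng.normal(0.0, 1.0, n)
    y = 2.0 * x + eps * theta + rng.normal(0.0, 0.5, n)
    return x, y

def dr_performative_risk(theta, rho=0.1, lambdas=np.linspace(0.05, 5.0, 60)):
    """Worst-case performative risk over a KL ball of radius rho around
    the nominal map D(theta), via the dual
    inf_{lam>0} lam*rho + lam*log E[exp(loss/lam)]."""
    x, y = nominal_map(theta)
    loss = (y - theta * x) ** 2                 # squared loss under D(theta)
    duals = [lam * rho + lam * (logsumexp(loss / lam) - np.log(loss.size))
             for lam in lambdas]                # log-space avoids overflow
    return min(duals)

# A one-dimensional grid search stands in for the efficient optimization
# that the augmented reformulation would provide.
thetas = np.linspace(0.0, 3.0, 61)
theta_drpo = thetas[int(np.argmin([dr_performative_risk(t) for t in thetas]))]
print(f"DRPO estimate: theta = {theta_drpo:.2f}")
```

Setting $\rho = 0$ collapses the worst case to the nominal performative risk and recovers the ordinary PO objective, so the radius $\rho$ controls how much protection against misspecification of the distribution map the DRPO buys.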