When a machine learning model is deployed, its predictions can alter its environment: better-informed agents strategize to suit their own interests. Under such alterations, existing approaches to uncertainty quantification break down. In this work we propose a new framework, Strategic Conformal Prediction, capable of robust uncertainty quantification in this setting. Strategic Conformal Prediction is backed by a series of distribution-free theoretical guarantees spanning marginal coverage, training-conditional coverage, tightness, and robustness to misspecification. Experimental analysis further validates our method, showing its effectiveness in the face of arbitrary strategic alterations, where other methods break.
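For context, the marginal coverage guarantee referenced above is the defining property of standard conformal prediction: prediction sets contain the true label with probability at least 1 - alpha. The sketch below illustrates vanilla split conformal prediction on a toy regression task; it is background only, not the Strategic Conformal Prediction method of this paper, and the synthetic data and stand-in model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise (illustrative, not from the paper).
n_cal, n_test = 500, 200
x_cal = rng.uniform(0, 1, n_cal)
y_cal = 2 * x_cal + rng.normal(0, 0.1, n_cal)
x_test = rng.uniform(0, 1, n_test)
y_test = 2 * x_test + rng.normal(0, 0.1, n_test)

def predict(x):
    # Stand-in for a fitted point predictor.
    return 2 * x

alpha = 0.1  # target miscoverage level

# Absolute-residual nonconformity scores on the held-out calibration set.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile with the (n+1)/n finite-sample correction.
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, level, method="higher")

# Prediction intervals [f(x) - q, f(x) + q] achieve marginal coverage
# >= 1 - alpha when calibration and test points are exchangeable --
# the assumption that strategic alterations of the environment violate.
covered = np.abs(y_test - predict(x_test)) <= q
print(f"empirical coverage: {covered.mean():.3f}")
```

The exchangeability assumption in the final comment is precisely what breaks when agents respond strategically to the deployed model, which motivates the framework proposed here.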