Conformal prediction (CP) is an important tool for distribution-free predictive uncertainty quantification. A major challenge, however, is balancing computational efficiency and prediction accuracy, particularly when making multiple predictions. We propose Leave-One-Out Stable Conformal Prediction (LOO-StabCP), a novel method that uses algorithmic stability to speed up full conformal prediction without sample splitting. By leveraging leave-one-out stability, our method handles a large number of prediction requests much faster than the existing method RO-StabCP, which is based on replace-one stability. We derive stability bounds for two popular machine learning tools, regularized loss minimization (RLM) and stochastic gradient descent (SGD), as well as for kernel methods, neural networks, and bagging. Our method is theoretically justified and demonstrates superior numerical performance on synthetic and real-world data. Applied to a screening problem, our method effectively exploits the training data and achieves improved test power over a state-of-the-art method based on split conformal prediction.
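For intuition, here is a minimal sketch of the key computational idea: a leave-one-out stability bound lets a single model fit on the training data serve every prediction request, with nonconformity scores inflated by the bound instead of refitting per test point. The function name `loo_stab_cp`, the Ridge regressor, and the value of `tau` are illustrative assumptions; the paper's actual bounds for RLM and SGD, and the exact form of the score inflation, differ in detail.

```python
# A minimal sketch of stability-corrected conformal intervals, assuming a
# user-supplied leave-one-out stability bound `tau` (hypothetical value here;
# the paper derives concrete bounds for RLM and SGD).
import numpy as np
from sklearn.linear_model import Ridge  # stand-in for any stable learner

def loo_stab_cp(X_train, y_train, X_test, alpha=0.1, tau=0.05):
    """Prediction intervals from a single fit on the training data.

    Scores are inflated by `tau` to account for how much the fit could
    change if a test point were added, so no refitting is needed for
    each test point or candidate label.
    """
    model = Ridge(alpha=1.0).fit(X_train, y_train)  # one fit serves all requests
    # Nonconformity scores on the training data, inflated by tau.
    scores = np.abs(y_train - model.predict(X_train)) + tau
    n = len(y_train)
    # Conservative (1 - alpha) conformal quantile with finite-sample correction.
    k = int(np.ceil((1 - alpha) * (n + 1)))
    q = np.sort(scores)[min(k, n) - 1]
    preds = model.predict(X_test)
    half_width = q + tau  # the test score is also perturbed by at most tau
    return preds - half_width, preds + half_width

# Example usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(size=200)
lo, hi = loo_stab_cp(X[:150], y[:150], X[150:], alpha=0.1, tau=0.05)
```

Because the fit is computed once and only quantiles of inflated scores are reused, the cost of m prediction requests is one training run plus O(m) predictions, which is the speedup the abstract attributes to leave-one-out stability over replace-one stability.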