Privacy protection and uncertainty quantification are increasingly important in data-driven decision making. Conformal prediction provides finite-sample marginal coverage, but existing private approaches often rely on data splitting, which reduces the effective sample size. We propose a full-data privacy-preserving conformal prediction framework that avoids splitting. Our framework leverages the stability induced by differential privacy to control the gap between in-sample and out-of-sample conformal scores, and pairs this with a conservative private quantile routine designed to prevent under-coverage. We show that a generic differential privacy guarantee yields a universal coverage floor, yet cannot in general recover the nominal $1-\alpha$ level. We then provide a refined, mechanism-specific stability analysis that yields asymptotic recovery of the nominal level. Experiments demonstrate sharper prediction sets than the split-based private baseline.