This paper introduces a conformal inference method for evaluating uncertainty in classification by generating prediction sets with valid coverage conditional on adaptively chosen features. These features are carefully selected to reflect potential model limitations or biases. This can be useful for striking a practical compromise between efficiency -- providing informative predictions -- and algorithmic fairness -- ensuring equalized coverage for the most sensitive groups. We demonstrate the validity and effectiveness of this method on simulated and real data sets.
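The notion of equalized coverage across groups can be illustrated with a generic split-conformal sketch. This is not the paper's adaptive feature-selection procedure, only a minimal baseline it builds on: calibration scores are computed separately within each group, so each group receives its own quantile threshold and hence its own coverage guarantee. The toy model, the group structure, and all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
K, alpha = 3, 0.1  # number of classes, target miscoverage level

def sample(n):
    """Toy data: a binary 'sensitive' group, labels, and model probabilities."""
    group = rng.integers(0, 2, size=n)
    y = rng.integers(0, K, size=n)
    boost = np.where(group == 0, 6.0, 1.5)   # group 1 is "harder" for the model
    logits = rng.normal(size=(n, K))
    logits[np.arange(n), y] += boost         # push mass toward the true class
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return group, y, probs

# Split-conformal calibration with one quantile threshold per group.
g_cal, y_cal, p_cal = sample(2000)
scores = 1.0 - p_cal[np.arange(len(y_cal)), y_cal]   # conformity scores
qhat = np.empty(2)
for g in (0, 1):
    s = scores[g_cal == g]
    n_g = len(s)
    level = min(1.0, np.ceil((n_g + 1) * (1 - alpha)) / n_g)
    qhat[g] = np.quantile(s, level)

# Prediction sets on test data: keep classes whose probability clears
# the threshold of that point's group.
g_te, y_te, p_te = sample(2000)
sets = p_te >= (1.0 - qhat[g_te])[:, None]
for g in (0, 1):
    mask = g_te == g
    cov = sets[mask, y_te[mask]].mean()
    size = sets[mask].sum(axis=1).mean()
    print(f"group {g}: coverage {cov:.3f}, avg set size {size:.2f}")
```

Because the quantile is taken within each group, both groups attain roughly the nominal 90% coverage; the harder group simply pays for it with larger prediction sets, which is the efficiency-fairness trade-off the abstract refers to.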