Conformal Prediction (CP) is a popular method for uncertainty quantification with machine learning models. While conformal prediction provides probabilistic guarantees regarding the coverage of the true label, these guarantees are agnostic to the presence of sensitive attributes within the dataset. In this work, we formalize \textit{Conformal Fairness}, a notion of fairness using conformal predictors, and provide a theoretically well-founded algorithm and associated framework to control the gaps in coverage between different sensitive groups. Our framework leverages the exchangeability assumption (implicit in CP) rather than the typical IID assumption, allowing us to apply the notion of Conformal Fairness to data types and tasks that are not IID, such as graph data. Experiments on graph and tabular datasets demonstrate that the algorithm can control fairness-related gaps in addition to coverage, in line with theoretical expectations.
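As a hedged sketch of the two quantities at play (using illustrative notation not taken from this work: $C_\alpha$ denotes the conformal prediction set at miscoverage level $\alpha$, $A$ the sensitive attribute, and $\epsilon$ a user-chosen tolerance), exchangeability yields the standard marginal coverage guarantee, and one natural reading of the fairness goal is a bound on group-conditional coverage gaps:
\[
\Pr\bigl(Y_{n+1} \in C_\alpha(X_{n+1})\bigr) \;\ge\; 1 - \alpha,
\qquad
\bigl|\Pr\bigl(Y \in C_\alpha(X) \mid A = a\bigr) - \Pr\bigl(Y \in C_\alpha(X) \mid A = b\bigr)\bigr| \;\le\; \epsilon
\quad \text{for all groups } a, b.
\]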