Although conformal prediction is a promising method for quantifying the uncertainty of machine learning models, the prediction sets it outputs are not inherently actionable. Many applications require a single output to act on, not several. To overcome this, prediction sets can be provided to a human who then makes an informed decision. In any such system it is crucial to ensure the fairness of outcomes across protected groups, and researchers have proposed Equalized Coverage as the standard for fairness. By conducting experiments with human participants, we demonstrate that providing prediction sets can increase the unfairness of their decisions. Disquietingly, we find that providing sets that satisfy Equalized Coverage actually increases unfairness compared to marginal coverage. Instead of equalizing coverage, we propose to equalize set sizes across groups, which empirically leads to fairer outcomes.
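For context, the sketch below illustrates the distinction the abstract draws between marginal coverage and Equalized Coverage: the latter corresponds to calibrating the conformal quantile separately within each protected group (group-conditional, or "Mondrian", calibration). This is a minimal illustrative sketch, not the authors' implementation; the function names and the choice of conformity score (one minus the softmax probability of the true class) are assumptions.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

def prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1,
                    cal_groups=None, test_groups=None):
    """Return a boolean membership matrix of shape (n_test, n_classes).

    Score: 1 - p(true class), a standard split-conformal choice.
    If `cal_groups` is None, a single marginal quantile is used
    (marginal coverage). Otherwise the quantile is computed per group
    (group-conditional calibration, yielding Equalized Coverage).
    """
    cal_scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    if cal_groups is None:
        # Marginal: one threshold shared by all test points.
        qhat = conformal_quantile(cal_scores, alpha)
        return 1.0 - test_probs <= qhat
    # Equalized Coverage: one threshold per protected group.
    sets = np.zeros(test_probs.shape, dtype=bool)
    for g in np.unique(cal_groups):
        qhat_g = conformal_quantile(cal_scores[cal_groups == g], alpha)
        mask = test_groups == g
        sets[mask] = 1.0 - test_probs[mask] <= qhat_g
    return sets
```

Under group-conditional calibration, each group separately attains the target 1 - alpha coverage, which is precisely the Equalized Coverage guarantee the abstract contrasts with marginal coverage; the resulting per-group thresholds generally differ, so set sizes can differ across groups, the disparity the authors' set-size-equalization proposal targets.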