The growing philosophical literature on algorithmic fairness has examined statistical criteria such as equalized odds and calibration, causal and counterfactual approaches, and the role of structural and compounding injustices. Yet an important dimension has been overlooked: whether the evidential value of an algorithmic output itself depends on structural injustice. Our paradigmatic pair of examples contrasts a predictive policing algorithm, which relies on historical crime data, with a camera-based system that records ongoing offenses, both designed to guide police deployment. In evaluating the moral acceptability of acting on a piece of evidence, we must ask not only whether the evidence is probative in the actual world, but also whether it would remain probative in nearby worlds without the relevant injustices. The predictive policing algorithm fails this test, but the camera-based system passes it. When evidence fails the test, using it punitively is more morally problematic than using evidence that passes it.