Predictive policing systems, such as those deployed in Los Angeles, California, and Baltimore, Maryland, are the subject of ongoing debate over fairness, particularly concerning racial bias. Prior studies attribute this unfairness to feedback loops and to training on historically biased crime records. However, comparative studies of predictive policing systems remain few and insufficiently comprehensive. In this work, we conduct a comprehensive comparative simulation study of the fairness and accuracy of predictive policing technologies in Baltimore. Our results suggest that bias in predictive policing is more complex than previously assumed. While predictive policing exhibited bias due to feedback loops, as previously reported, we found that the traditional alternative, hot spots policing, suffered from similar issues. In the short term, predictive policing was more fair and more accurate than hot spots policing, although it amplified bias faster, suggesting potentially worse long-run behavior. In some cases in Baltimore, the bias in these systems tended toward over-policing of White neighborhoods, in contrast to previous studies. Overall, this work demonstrates a methodology for city-specific evaluation and behavioral-tendency comparison of predictive policing systems, showing how such simulations can reveal inequities and long-term tendencies.