Recent conversations in the algorithmic fairness literature have raised several concerns with standard conceptions of fairness. First, constraining predictive algorithms to satisfy fairness benchmarks may lead to suboptimal outcomes for disadvantaged groups. Second, technical interventions are often ineffective on their own, especially when divorced from an understanding of the structural processes that generate social inequality. Motivated by both critiques, we construct a common decision-making model, using mortgage loans as a running example. We show that under some conditions, any choice of decision threshold will inevitably perpetuate existing disparities in financial stability unless one deviates from the Pareto optimal policy. We then model the effects of three different types of interventions. We show that the recommended intervention depends both on the difficulty of enacting structural change on external parameters and on the policymaker's preference for equity versus efficiency. Counterintuitively, we demonstrate that a preference for efficiency over equity may lead to recommending interventions that target the under-resourced group. Finally, we simulate the effects of interventions on a dataset that combines HMDA and Fannie Mae loan data. This research highlights the ways that structural inequality can be perpetuated by seemingly unbiased decision mechanisms, and it shows that in many situations, technical solutions must be paired with external, context-aware interventions to enact social change.