Disparate impact doctrine offers an important legal apparatus for targeting discriminatory data-driven algorithmic decisions. A recent body of work has focused on conceptualizing one particular construct from this doctrine: the less discriminatory alternative, an alternative policy that reduces disparities while meeting the same business needs as a status quo or baseline policy. However, attempts to operationalize this construct in the algorithmic setting must grapple with some thorny challenges and ambiguities. In this paper, we attempt to raise and resolve important questions about less discriminatory algorithms (LDAs). How should we formally define LDAs, and how does this definition interact with the different societal goals they might serve? How feasible is it for firms or plaintiffs to computationally search for candidate LDAs? We find that formal LDA definitions face fundamental challenges when they attempt to evaluate and compare predictive models in the absence of held-out data. As a result, we argue that LDA definitions cannot be purely quantitative, and must rely on standards of "reasonableness." We then identify both mathematical and computational constraints on firms' ability to efficiently conduct a proactive search for LDAs, but we provide evidence that these limits are "weak" in a formal sense. By defining LDAs formally, we put forward a framework in which both firms and plaintiffs can search for alternative models that comport with societal goals.
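To make the quantitative core of the LDA construct concrete, the sketch below screens candidate models against a baseline on held-out data: a candidate qualifies if it matches the baseline's accuracy (within a tolerance) while strictly reducing a between-group disparity. This is an illustrative reading only, not the paper's formal definition; the disparity metric (difference in positive-prediction rates), the tolerance, and all function names are assumptions.

```python
# Illustrative sketch (not the paper's formal definition): screening candidate
# models for a "less discriminatory alternative" relative to a baseline,
# using held-out labels and group membership. Metric and tolerance are
# assumptions chosen for illustration.

def disparity(preds, groups):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(rate(0) - rate(1))

def accuracy(preds, labels):
    """Fraction of held-out examples predicted correctly."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def find_ldas(baseline_preds, candidate_preds, labels, groups, tol=0.01):
    """Indices of candidates that match baseline accuracy (within tol)
    while strictly reducing disparity -- one quantitative reading of an LDA."""
    base_acc = accuracy(baseline_preds, labels)
    base_disp = disparity(baseline_preds, groups)
    return [i for i, preds in enumerate(candidate_preds)
            if accuracy(preds, labels) >= base_acc - tol
            and disparity(preds, groups) < base_disp]
```

Note that this purely quantitative screen is exactly what the paper argues is insufficient on its own: without held-out data, neither `accuracy` nor `disparity` can be evaluated, which is why the definition must ultimately lean on standards of "reasonableness."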