This article studies how to intervene against statistical discrimination when it is based on beliefs generated by machine learning rather than by humans. Unlike beliefs formed by a human mind, machine learning-generated beliefs are verifiable. This allows interventions to move beyond simple, belief-free designs like affirmative action to more sophisticated ones that constrain decision makers in ways that depend on what they are thinking. Such mind-reading interventions can perform well where affirmative action does not, even when the beliefs being conditioned on are possibly incorrect and biased.