In recent years, many important societal decisions have been made by machine-learning algorithms, and many such decisions are subject to strict capacity limits, allowing resources to be allocated only to the highest-utility individuals. Examples include allocating physician appointments to the patients most likely to have a medical condition, or choosing which children will attend a special program. In such decisions, we consider both the prediction aspect and the resource-allocation aspect. In this work we focus on the fairness of decisions in these settings. Fairness is critical here because the resources are limited: allocating a resource to one individual leaves fewer resources for others. When a decision combines prediction with resource allocation, there is a risk that information gaps between different populations will lead to a highly unbalanced allocation of resources. We address these settings by adapting definitions from resource-allocation schemes, identifying connections between algorithmic-fairness definitions and resource-allocation ones, and examining the trade-offs between fairness and utility. We analyze the price of enforcing the different fairness definitions relative to a strictly utility-based optimization of the predictor, and show that it can be unbounded. We introduce an adaptation of proportional fairness and show that it has a bounded price of fairness, indicating greater robustness, and we propose a variant of equal opportunity that also has a bounded price of fairness.
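As a minimal sketch of the central quantity discussed above, the "price of fairness" can be read as the ratio between the best achievable total utility without any fairness constraint and the best achievable under one. The toy example below (our own construction, not the paper's formal model; the candidate utilities, the groups A/B, and the crude "each group gets at least one slot" constraint are all illustrative assumptions) shows how a capacity limit makes this ratio concrete.

```python
from itertools import combinations

# Hypothetical utilities of candidates from two groups, A and B
# (illustrative numbers only).
candidates = [("A", 0.9), ("A", 0.8), ("A", 0.7), ("B", 0.4), ("B", 0.3)]
capacity = 2  # only two slots can be allocated

def best_utility(require_balance: bool) -> float:
    """Max total utility over all size-`capacity` subsets; optionally
    require that each group receives at least one slot (a stand-in for
    a fairness constraint, not the paper's definitions)."""
    best = 0.0
    for subset in combinations(candidates, capacity):
        groups = {g for g, _ in subset}
        if require_balance and groups != {"A", "B"}:
            continue
        best = max(best, sum(u for _, u in subset))
    return best

unconstrained = best_utility(require_balance=False)  # 0.9 + 0.8 = 1.7
fair = best_utility(require_balance=True)            # 0.9 + 0.4 = 1.3
price_of_fairness = unconstrained / fair             # ~1.31
```

Widening the utility gap between the groups drives this ratio up without limit, which is the intuition behind an unbounded price of fairness; the paper's contribution is identifying fairness definitions for which such a ratio provably stays bounded.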