Various metrics and interventions have been developed to identify and mitigate unfair outputs of machine learning systems. While individuals and organizations have an obligation to avoid discrimination, the use of fairness-aware machine learning interventions has also been described as amounting to 'algorithmic positive action' under European Union (EU) non-discrimination law. As the Court of Justice of the European Union has been strict when assessing the lawfulness of positive action, this interpretation would impose a significant legal burden on those wishing to implement fair-ml interventions. In this paper, we propose that algorithmic fairness interventions should often be interpreted as a means to prevent discrimination rather than as a measure of positive action. Specifically, we suggest that the category mistake of framing such interventions as positive action can often be attributed to neutrality fallacies: faulty assumptions regarding the neutrality of fairness-aware algorithmic decision-making. Our findings raise the question of whether a negative obligation to refrain from discrimination is sufficient in the context of algorithmic decision-making. Consequently, we suggest moving away from a duty to 'not do harm' towards a positive obligation to actively 'do no harm' as a more adequate framework for algorithmic decision-making and fair-ml interventions.
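To make the opening sentence concrete, the following is a minimal sketch of one widely used fairness metric, the demographic parity difference: the gap in positive-prediction rates between two groups. The function name, group labels, and data are illustrative assumptions, not part of the paper.

```python
def demographic_parity_difference(y_pred, group):
    """Absolute difference in selection rates between two groups.

    y_pred: binary predictions (1 = positive outcome, e.g. hired)
    group:  protected-attribute label for each prediction
    """
    rates = {}
    for g in set(group):
        # Selection rate: fraction of positive predictions within group g.
        selected = [p for p, gr in zip(y_pred, group) if gr == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical example: group "a" receives the positive outcome at a
# rate of 2/3, group "b" at a rate of 1/3, so the difference is ~0.33.
preds = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))
```

A fair-ml intervention in this sense would adjust the model or its decision threshold to shrink this gap; the paper's argument concerns how such an adjustment should be classified legally, not how it is computed.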