In recent years, the number of new applications for highly complex AI systems has risen significantly. Algorithmic decision-making systems (ADMs) are one such application, in which an AI system replaces the decision-making process of a human expert. As one approach to ensuring the fairness and transparency of such systems, explainable AI (XAI) has become increasingly important. One way to achieve explainability is the use of surrogate models, i.e., training a new, simpler machine learning model on the input-output relationship of a black-box model. The simpler machine learning model could, for example, be a decision tree, which is thought to be intuitively understandable by humans. However, there is little insight into how well the surrogate model approximates the black box. Our main assumption is that a good surrogate model approach should bring discriminatory behavior of the black box, e.g., a subgroup being systematically disadvantaged, to the attention of humans; prior to our research, we assumed that a surrogate decision tree would expose such a pattern in one of its first levels. However, in this article we show that even if the discriminated subgroup, while identical in all other attributes, does not receive a single positive decision from the black-box ADM system, the operator of the system can push the corresponding question of group membership down to an arbitrarily deep level of the tree. We then generalize this finding to pinpoint the exact level of the tree on which the discriminating question is asked, and we show that in a more realistic scenario, where discrimination affects only a fraction of the disadvantaged group, it is even easier to hide such discrimination. Our approach can easily be generalized to other surrogate models.
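The surrogate-model setup described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes scikit-learn, and the black box is a hypothetical hand-coded ADM that never grants a positive decision to one subgroup. A decision tree is then trained purely on the black box's input-output pairs.

```python
# Minimal sketch of a surrogate decision tree for a black-box ADM.
# Assumptions (not from the paper): scikit-learn as the library, and a
# hypothetical discriminating black box defined below for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

n = 1000
# Feature 0: protected group membership (0 = advantaged, 1 = disadvantaged).
# Features 1-2: qualification scores, identically distributed in both groups.
group = rng.integers(0, 2, size=n)
scores = rng.normal(size=(n, 2))
X = np.column_stack([group, scores])

def black_box(X):
    """Hypothetical discriminating ADM: qualified applicants get a positive
    decision, but members of group 1 never do."""
    qualified = X[:, 1] + X[:, 2] > 0
    return (qualified & (X[:, 0] == 0)).astype(int)

y = black_box(X)

# Train the surrogate on the black box's decisions (its input-output pairs).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Fidelity: how closely the surrogate approximates the black box.
fidelity = (surrogate.predict(X) == y).mean()
print(f"surrogate fidelity: {fidelity:.2f}")

# Which feature is tested at the root? With blatant discrimination of this
# kind, one would expect the group-membership feature (index 0) near the top,
# which is exactly the expectation the article shows can be subverted.
print("root split feature:", surrogate.tree_.feature[0])
```

In this unmanipulated setting, the group-membership split is highly informative and surfaces at the root; the article's point is that an operator can restructure the black box so that an equally faithful surrogate asks this question only at an arbitrarily deep level.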