Machine learning models are often used to decide who receives a loan, a job interview, or a public benefit. Models in such settings use features without considering their actionability. As a result, they can assign predictions that are fixed, meaning that individuals who are denied loans and interviews are, in fact, precluded from access to credit and employment. In this work, we introduce a procedure called recourse verification to test whether a model assigns fixed predictions to its decision subjects. We propose a model-agnostic approach for recourse verification with reachable sets, i.e., the set of all points that a person can reach through their actions in feature space. We develop methods to construct reachable sets for discrete feature spaces, which can certify the responsiveness of any model simply by querying its predictions. We conduct a comprehensive empirical study of the infeasibility of recourse on datasets from consumer finance. Our results highlight how models can inadvertently preclude access by assigning fixed predictions and underscore the need to account for actionability in model development.
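The verification procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a toy action model in which each actionable feature can be set independently to any value in a small discrete set (names like `reachable_set`, `has_recourse`, and `actionable_values` are hypothetical). Because the check only queries `predict`, it is model-agnostic.

```python
from itertools import product

def reachable_set(x, actionable_values):
    """Enumerate all points reachable from x in a discrete feature space.

    Hypothetical action model: each actionable feature j may be changed
    independently to any value in actionable_values[j]; features absent
    from the dict are treated as immutable and stay at their current value.
    """
    grids = [actionable_values.get(j, [xj]) for j, xj in enumerate(x)]
    return [list(p) for p in product(*grids)]

def has_recourse(predict, x, actionable_values, target=1):
    """Recourse verification by prediction queries alone.

    The prediction at x is 'fixed' if and only if no reachable point
    receives the target (favorable) label.
    """
    return any(predict(xp) == target
               for xp in reachable_set(x, actionable_values))
```

For example, with a classifier `predict = lambda z: int(z[0] + z[1] >= 2)` and a person at `x = [0, 0]`, making only the first feature actionable over `{0, 1}` yields no recourse, while making both features actionable does, since the point `[1, 1]` becomes reachable. Exhaustive enumeration is exponential in the number of actionable features, so it only serves as a conceptual sketch for small discrete spaces.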