The widespread use of machine learning models in high-stakes domains can have a major negative impact, especially on individuals who receive undesirable outcomes. Algorithmic recourse provides such individuals with suggestions of minimum-cost improvements they can make to achieve a desirable outcome in the future. However, machine learning models often get updated over time, which can invalidate a recourse (i.e., prevent it from leading to the desirable outcome). The robust recourse literature aims to choose recourses that remain valid even under adversarial model changes, but this robustness comes at a higher cost. To overcome this obstacle, we initiate the study of algorithmic recourse through the learning-augmented framework and evaluate the extent to which a designer equipped with a prediction of future model changes can reduce the cost of recourse when the prediction is accurate (consistency) while also limiting the cost even when it is inaccurate (robustness). We propose a novel algorithm for this problem, study the robustness-consistency trade-off, and analyze how prediction accuracy affects performance.
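The tension between cheap recourse and robustness to model updates can be sketched with a toy linear classifier; all names and numbers below are illustrative assumptions, not the paper's notation or algorithm.

```python
import numpy as np

# Toy linear model: the outcome is desirable when w @ x + b >= 0.
# (w, b, x, margin are hypothetical values chosen for illustration.)
w, b = np.array([1.0, 2.0]), -4.0
x = np.array([1.0, 1.0])  # current features: w @ x + b = -1 < 0 (undesirable)

def min_cost_recourse(w, b, x, margin=0.0):
    """L2-minimal change reaching w @ x' + b >= margin (closed form for linear models)."""
    gap = margin - (w @ x + b)
    if gap <= 0:
        return x.copy()  # already satisfies the target, no change needed
    return x + gap * w / (w @ w)  # project onto the (shifted) decision boundary

x_plain = min_cost_recourse(w, b, x)              # just reaches the boundary
x_robust = min_cost_recourse(w, b, x, margin=1.0)  # pays extra cost for a safety margin

# The model is later updated; the minimum-cost recourse can become invalid.
w_new, b_new = np.array([1.0, 2.0]), -4.5
print(w_new @ x_plain + b_new >= 0)   # plain recourse no longer yields the outcome
print(w_new @ x_robust + b_new >= 0)  # robust recourse stays valid, at higher cost
```

The extra margin buys validity under the model shift but strictly increases the cost of the recourse, which is exactly the trade-off a prediction of the future model could help avoid.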