Counterfactual explanations provide ways of achieving a favorable model outcome with minimal input perturbation. However, counterfactual explanations can also be leveraged to reconstruct the model by strategically training a surrogate model to give predictions similar to those of the original (target) model. In this work, we analyze how model reconstruction using counterfactuals can be improved by further leveraging the fact that counterfactuals also lie close to the decision boundary. Our main contribution is to use polytope theory to derive novel theoretical relationships between the error in model reconstruction and the number of counterfactual queries required. This analysis leads us to propose a model reconstruction strategy that we call the Counterfactual Clamping Attack (CCA), which trains a surrogate model using a loss function that treats counterfactuals differently from ordinary instances. Our approach also alleviates the related problem of decision boundary shift that arises in existing model reconstruction approaches when counterfactuals are treated as ordinary instances. Experimental results demonstrate that our strategy improves fidelity between the target and surrogate model predictions on several datasets.
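The abstract does not spell out the CCA loss, but the core idea (penalizing counterfactual queries only one-sidedly, so the surrogate's boundary is not pushed past them) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `cca_style_loss`, the threshold `tau`, and the one-sided penalty form are hypothetical, not the paper's actual formulation.

```python
import numpy as np

def cca_style_loss(p, y, is_cf, tau=0.5):
    """Hypothetical sketch of a clamping-style surrogate loss.

    p     : surrogate's predicted probability of the favorable class
    y     : target model's label (1 = favorable)
    is_cf : True where the instance is a counterfactual query

    Ordinary instances get standard binary cross-entropy. Counterfactuals,
    which lie just on the favorable side of the boundary, are penalized
    only while p < tau; once the surrogate places them on the favorable
    side, they contribute zero loss, avoiding decision-boundary shift.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)  # numerical safety for the logs
    bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    one_sided = np.where(p < tau, -np.log(p), 0.0)  # clamped CF penalty
    return float(np.where(is_cf, one_sided, bce).mean())
```

Under this sketch, a counterfactual already predicted favorable (`p >= tau`) exerts no gradient pressure, whereas treating it as an ordinary positive instance would keep pulling the boundary away from it.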