Post-hoc explainability is essential for understanding black-box machine learning models. Surrogate-based techniques are widely used for local and global model-agnostic explanations but have significant limitations. Local surrogates capture non-linearities but are computationally expensive and sensitive to parameters, while global surrogates are more efficient but struggle with complex local behaviors. In this paper, we present ILLUME, a flexible and interpretable framework grounded in representation learning that can be integrated with various surrogate models to provide explanations for any black-box classifier. Specifically, our approach combines a globally trained surrogate with instance-specific linear transformations learned by a meta-encoder to generate both local and global explanations. Through extensive empirical evaluations, we demonstrate the effectiveness of ILLUME in producing feature attributions and decision rules that are not only accurate but also robust and computationally efficient, thus providing a unified explanation framework that effectively addresses the limitations of traditional surrogate methods.