The debate between self-interpretable models and post-hoc explanations for black-box models is central to Explainable AI (XAI). Self-interpretable models, such as concept-based networks, offer insights by connecting decisions to human-understandable concepts but often struggle with performance and scalability. Conversely, post-hoc methods like Shapley values, while theoretically robust, are computationally expensive and resource-intensive. To bridge the gap between these two lines of research, we propose a novel method that combines their strengths, providing theoretically guaranteed self-interpretability for black-box models without compromising prediction accuracy. Specifically, we introduce a parameter-efficient pipeline, *AutoGnothi*, which integrates a small side network into the black-box model, allowing it to generate Shapley value explanations without changing the original network parameters. This side-tuning approach significantly reduces memory, training, and inference costs compared with traditional parameter-efficient methods, for which full fine-tuning remains the strongest baseline. *AutoGnothi* enables the black-box model to predict and explain its predictions with minimal overhead. Extensive experiments show that *AutoGnothi* offers accurate explanations for both vision and language tasks, delivering superior computational efficiency with comparable interpretability.
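To make concrete why exact Shapley value explanations are computationally expensive (and why a learned explainer such as a side network is attractive), the following minimal sketch computes exact Shapley attributions for a toy additive model by brute-force enumeration of coalitions. The function names (`shapley_values`, `toy_model`) and the additive value function are illustrative assumptions, not part of the *AutoGnothi* pipeline itself:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values via enumeration of all coalitions.

    value_fn(subset) -> model output when only the features in
    `subset` (a frozenset of indices) are present.
    Cost is exponential in n_features, which is why practical
    explainers approximate or amortize this computation.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = frozenset(S)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                     / factorial(n_features))
                # Marginal contribution of feature i to coalition S
                phi[i] += w * (value_fn(S | {i}) - value_fn(S))
    return phi

# Toy "model": output is a weighted sum of the present features.
weights = [3.0, 1.0, 2.0]
def toy_model(subset):
    return sum(weights[j] for j in subset)

print(shapley_values(toy_model, 3))  # additive model => attributions equal the weights
```

For an additive model the Shapley values recover each feature's weight exactly; for a real black-box network, the 2^n coalition evaluations are infeasible, motivating amortized explainers trained alongside the model.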