While several types of post hoc explanation methods have been proposed in the recent literature, there is very little work on systematically benchmarking these methods. Here, we introduce OpenXAI, a comprehensive and extensible open-source framework for evaluating and benchmarking post hoc explanation methods. OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, and (ii) open-source implementations of eleven quantitative metrics for evaluating the faithfulness, stability (robustness), and fairness of explanation methods, in turn enabling comparisons of several explanation methods across a wide variety of metrics, models, and datasets. OpenXAI is easily extensible, as users can readily evaluate custom explanation methods and incorporate them into our leaderboards. Overall, OpenXAI provides an automated end-to-end pipeline that not only simplifies and standardizes the evaluation of post hoc explanation methods, but also promotes transparency and reproducibility in benchmarking these methods. While the first release of OpenXAI supports only tabular datasets, the explanation methods and metrics that we consider are general enough to be applicable to other data modalities. OpenXAI datasets and models, along with implementations of state-of-the-art explanation methods and evaluation metrics, are publicly available at this GitHub link.
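To illustrate the flavor of the faithfulness metrics mentioned above, the sketch below computes a top-k feature agreement score between an attribution vector and ground-truth feature importances (as available from a synthetic data generator with known generative weights). This is a generic, illustrative implementation, not OpenXAI's actual API; the function name and signature are assumptions for this example.

```python
import numpy as np

def topk_feature_agreement(attributions, ground_truth, k=3):
    """Fraction of the top-k attributed features that also rank in the
    top-k ground-truth features. One of many possible faithfulness
    proxies; illustrative only, not OpenXAI's actual metric API."""
    top_attr = set(np.argsort(-np.abs(attributions))[:k])
    top_true = set(np.argsort(-np.abs(ground_truth))[:k])
    return len(top_attr & top_true) / k

# Synthetic example: explanation vs. known generative weights.
attr = np.array([0.9, 0.05, 0.4, 0.01, 0.3])   # hypothetical attributions
truth = np.array([1.0, 0.0, 0.5, 0.0, 0.2])    # hypothetical true weights
print(topk_feature_agreement(attr, truth, k=3))  # 1.0: same top-3 features
```

A score of 1.0 indicates that the explanation recovers exactly the features that actually drive the (synthetic) labeling function, which is the kind of ground-truth comparison that synthetic data generators make possible.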