eXplainable Artificial Intelligence (XAI) has garnered significant attention for enhancing transparency and trust in machine learning models. However, most existing explanation techniques either offer a holistic view of the explainee model (global explanation) or focus on individual instances (local explanation), while the middle ground, i.e., cohort-based explanation, remains less explored. Cohort explanations offer insights into the explainee's behavior on a specific group, or cohort, of instances, enabling a deeper understanding of model decisions within a defined context. In this paper, we discuss the unique challenges and opportunities associated with measuring cohort explanations, define their desired properties, and propose a generalized framework for generating cohort explanations based on supervised clustering.