In recent years, Explainable AI (XAI) methods have enabled in-depth validation and knowledge extraction from ML models. While extensively studied for classification, few XAI solutions have addressed the challenges specific to regression models. In regression, explanations need to be formulated precisely to address specific user queries (e.g.\ distinguishing between `Why is the output above 0?' and `Why is the output above 50?'). They should furthermore reflect the model's behavior on the relevant data sub-manifold. In this paper, we introduce XpertAI, a framework that disentangles the prediction strategy into multiple range-specific sub-strategies and allows precise queries about the model (the `explanandum') to be formulated as a linear combination of those sub-strategies. XpertAI is formulated generally so that it can work alongside popular XAI attribution techniques based on occlusion, gradient integration, or reverse propagation. Qualitative and quantitative results demonstrate the benefits of our approach.