Current deep learning models are not designed to simultaneously address three fundamental questions: predict class labels to solve a given classification task (the "What?"), simulate changes in the situation to evaluate how these changes impact class predictions (the "How?"), and imagine how the scenario should change to result in different class predictions (the "Why not?"). The inability to answer these questions represents a crucial gap in deploying reliable AI agents, calibrating human trust, and improving human-machine interaction. To bridge this gap, we introduce CounterFactual Concept Bottleneck Models (CF-CBMs), a class of models designed to efficiently address the above queries all at once without the need to run post-hoc searches. Our experimental results demonstrate that CF-CBMs achieve classification accuracy comparable to black-box models and existing CBMs (the "What?"), rely on fewer important concepts, leading to simpler explanations (the "How?"), and produce interpretable, concept-based counterfactuals (the "Why not?"). Additionally, we show that training the counterfactual generator jointly with the CBM leads to two key improvements: (i) it alters the model's decision-making process, making the model rely on fewer important concepts (leading to simpler explanations), and (ii) it significantly increases the causal effect of concept interventions on class predictions, making the model more responsive to these changes.
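To make the three queries concrete, the following is a minimal, hypothetical PyTorch sketch of a concept bottleneck model, not the authors' implementation: all layer sizes and names (ConceptBottleneckModel, concept_encoder, task_predictor) are illustrative assumptions, and the brute-force concept-flipping loop at the end merely stands in for the learned counterfactual generator that CF-CBMs train jointly with the CBM precisely to avoid such post-hoc searches.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Toy CBM: raw inputs -> interpretable concepts -> class label."""
    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        # x -> c: concept encoder maps raw inputs to concept activations in [0, 1].
        self.concept_encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_concepts), nn.Sigmoid(),
        )
        # c -> y: the task predictor sees only the concept bottleneck.
        self.task_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        c = self.concept_encoder(x)
        return c, self.task_predictor(c)

model = ConceptBottleneckModel(n_features=16, n_concepts=8, n_classes=3).eval()
x = torch.randn(1, 16)

with torch.no_grad():
    # "What?": predict the class label for the given input.
    concepts, logits = model(x)
    predicted = logits.argmax(dim=-1).item()
    print("predicted class:", predicted)

    # "How?": intervene on a concept and observe how the prediction shifts.
    intervened = concepts.clone()
    intervened[0, 2] = 1.0  # force concept 2 to be fully active
    print("after intervention:",
          model.task_predictor(intervened).argmax(dim=-1).item())

    # "Why not?": find a concept change that yields a different target class.
    # NOTE: this exhaustive flip is exactly the kind of post-hoc search that
    # CF-CBMs avoid by generating counterfactual concepts directly.
    target = (predicted + 1) % 3
    for j in range(concepts.shape[1]):
        candidate = concepts.clone()
        candidate[0, j] = 1.0 - candidate[0, j].round()  # flip binary concept j
        if model.task_predictor(candidate).argmax(dim=-1).item() == target:
            print(f"flipping concept {j} would yield class {target}")
            break
```

In the actual CF-CBMs, the counterfactual concepts are produced by a generator trained jointly with the CBM rather than by search, which is what the abstract credits for the simpler explanations and the stronger causal effect of concept interventions.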