Concept Bottleneck Models (CBMs) are regarded as inherently interpretable because they first predict a set of human-defined concepts, which are then used to predict a task label. For inherent interpretability to be fully realised, and to ensure trust in a model's output, it is desirable for concept predictions to use semantically meaningful input features. For instance, in an image, the pixels representing a broken bone should contribute to predicting a fracture. However, the current literature suggests that concept predictions often rely on irrelevant input features. We hypothesise that this occurs when dataset labels include inaccurate concept annotations, or when the relationship between input features and concepts is unclear. In general, the effect of dataset labelling on concept representations remains an understudied area. In this paper, we demonstrate that CBMs can learn to map concepts to semantically meaningful input features by utilising datasets with a clear link between the input features and the desired concept predictions. This is achieved, for instance, by ensuring that multiple concepts do not always co-occur, thereby providing a clear training signal for the CBM to distinguish the input features relevant to each concept. We validate our hypothesis on both synthetic and real-world image datasets, and demonstrate that, under the correct conditions, CBMs can learn to attribute semantically meaningful input features to the correct concept predictions.
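To make the two-stage structure described above concrete, the following is a minimal sketch of a jointly trained CBM in PyTorch. The layer sizes and the names `ConceptBottleneckModel`, `concept_encoder`, and `label_predictor` are illustrative assumptions, not the implementation evaluated in this paper.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Two-stage model: input features -> concept predictions -> task label."""

    def __init__(self, input_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Stage 1: map input features to human-defined concepts.
        self.concept_encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_concepts),
        )
        # Stage 2: predict the task label from the concepts alone, so the
        # label depends on the input only through the concept bottleneck.
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_encoder(x)
        # Sigmoid gives independent per-concept probabilities,
        # since multiple concepts may co-occur in one input.
        concepts = torch.sigmoid(concept_logits)
        label_logits = self.label_predictor(concepts)
        return concept_logits, label_logits


# Joint training step: supervise both concept and label predictions
# (illustrative dimensions and random data for demonstration only).
model = ConceptBottleneckModel(input_dim=64, n_concepts=10, n_classes=2)
x = torch.randn(8, 64)                    # batch of input features
c = torch.randint(0, 2, (8, 10)).float()  # binary concept annotations
y = torch.randint(0, 2, (8,))             # task labels

concept_logits, label_logits = model(x)
loss = (nn.functional.binary_cross_entropy_with_logits(concept_logits, c)
        + nn.functional.cross_entropy(label_logits, y))
loss.backward()
```

Because the label head sees only the concept activations, the quality of the concept annotations `c` directly shapes what the concept encoder attends to, which is the dataset-labelling effect this paper investigates.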