Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input. Through time-consuming manual interventions, a user can correct wrongly predicted concept values to enhance the model's downstream performance. We propose Stochastic Concept Bottleneck Models (SCBMs), a novel approach that models concept dependencies. In SCBMs, a single-concept intervention affects all correlated concepts, thereby improving intervention effectiveness. Unlike previous approaches that model the concept relations via an autoregressive structure, we introduce an explicit, distributional parameterization that allows SCBMs to retain the CBMs' efficient training and inference procedure. Additionally, we leverage the parameterization to derive an effective intervention strategy based on the confidence region. We show empirically on synthetic tabular and natural image datasets that our approach improves intervention effectiveness significantly. Notably, we showcase the versatility and usability of SCBMs by examining a setting with CLIP-inferred concepts, alleviating the need for manual concept annotations.
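The central idea — that intervening on one concept should shift all correlated concepts — can be illustrated with a minimal sketch. Assuming, for illustration only, that concept logits follow a joint multivariate normal (the paper's exact parameterization may differ), fixing one concept's value updates the remaining concepts through standard Gaussian conditioning; the function and variable names below are hypothetical:

```python
import numpy as np

def condition_on_concept(mu, Sigma, idx, value):
    """Condition a multivariate normal N(mu, Sigma) on concept `idx` = value.

    Returns the conditional mean and covariance of the remaining concepts,
    via the standard Gaussian conditioning formulas.
    """
    mask = np.arange(len(mu)) != idx
    mu_o, mu_i = mu[mask], mu[idx]
    S_oo = Sigma[np.ix_(mask, mask)]   # covariance among remaining concepts
    S_oi = Sigma[mask, idx]            # cross-covariance with intervened concept
    S_ii = Sigma[idx, idx]             # variance of intervened concept
    cond_mu = mu_o + S_oi / S_ii * (value - mu_i)
    cond_S = S_oo - np.outer(S_oi, S_oi) / S_ii
    return cond_mu, cond_S

# Illustrative example: three concepts, where concepts 0 and 1 are
# correlated and concept 2 is independent of both.
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.8, 0.0],
                  [0.8, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

# Intervening on concept 0 (setting its logit to 2.0) shifts the mean of
# the correlated concept 1, while the independent concept 2 is unchanged.
cond_mu, cond_S = condition_on_concept(mu, Sigma, 0, 2.0)
```

Here the correlated concept's conditional mean moves to 1.6 while the independent concept stays at 0, which is the intervention-propagation behavior the abstract describes; an autoregressive model would instead require a sequential pass to obtain the same update.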