Concept Bottleneck Models (CBMs) have been proposed as a compromise between white-box and black-box models, aiming to achieve interpretability without sacrificing accuracy. The standard training procedure for CBMs is to predefine a candidate set of human-interpretable concepts, extract their values from the training data, and identify a sparse subset as inputs to a transparent prediction model. However, such approaches are often hampered by the tradeoff between enumerating a sufficiently large set of concepts to include those that are truly relevant versus controlling the cost of obtaining concept extractions. This work investigates a novel approach that sidesteps these challenges: BC-LLM iteratively searches over a potentially infinite set of concepts within a Bayesian framework, in which Large Language Models (LLMs) serve as both a concept extraction mechanism and a prior. BC-LLM is broadly applicable and multimodal. Despite imperfections in LLMs, we prove that BC-LLM can provide rigorous statistical inference and uncertainty quantification. In experiments, it outperforms comparator methods including black-box models, converges more rapidly towards relevant concepts and away from spuriously correlated ones, and is more robust to out-of-distribution samples.