We introduce the Concept Bottleneck Large Language Model (CB-LLM), a pioneering approach to creating inherently interpretable Large Language Models (LLMs). Unlike traditional black-box LLMs that rely on post-hoc interpretation methods offering limited insight into neuron function, CB-LLM sets a new standard with its built-in interpretability, scalability, and ability to provide clear, accurate explanations. We investigate two essential NLP tasks: text classification and text generation. In text classification, CB-LLM narrows the performance gap with traditional black-box models while providing clear interpretability. In text generation, we show how the interpretable neurons in CB-LLM can be used for concept detection and for steering text generation. Our CB-LLMs enable greater interaction between humans and LLMs across a variety of tasks -- a feature notably absent in existing LLMs. Our code is available at https://github.com/Trustworthy-ML-Lab/CB-LLMs.