Foundation Language Models (FLMs) such as BERT and its variants have achieved remarkable success in natural language processing. To date, the interpretability of FLMs has relied primarily on the attention weights in their self-attention layers. However, these attention weights provide only word-level interpretations and fail to capture higher-level structures, and therefore lack readability and intuitiveness. To address this challenge, we first provide a formal definition of conceptual interpretation and then propose a variational Bayesian framework, dubbed VAriational Language Concept (VALC), that goes beyond word-level interpretations to provide concept-level interpretations. Our theoretical analysis shows that VALC finds the optimal language concepts for interpreting FLM predictions. Empirical results on several real-world datasets show that our method successfully provides conceptual interpretations for FLMs.