Large Language Models (LLMs) demonstrate an impressive capacity to recall a vast range of common factual knowledge. However, unravelling the underlying reasoning of LLMs and explaining how they internally exploit this factual knowledge remain active areas of investigation. Our work analyzes the factual knowledge encoded in the latent representations of LLMs when they are prompted to assess the truthfulness of factual claims. We propose an end-to-end framework that jointly decodes the factual knowledge embedded in the latent space of an LLM from a vector space into a set of ground predicates and represents its evolution across the layers as a temporal knowledge graph. Our framework relies on activation patching, a technique that intervenes in a model's inference computation by dynamically altering its latent representations; consequently, we rely on neither external models nor training processes. We showcase our framework with local and global interpretability analyses on two claim-verification datasets: FEVER and CLIMATE-FEVER. The local interpretability analysis exposes different latent errors, ranging from representation errors to multi-hop reasoning errors. The global analysis, on the other hand, uncovers patterns in the underlying evolution of the model's factual knowledge (e.g., store-and-seek of factual information). By enabling graph-based analyses of the latent representations, this work represents a step towards the mechanistic interpretability of LLMs.
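To make the core intervention concrete, the following is a minimal sketch of activation patching using PyTorch forward hooks. It is an illustration under assumptions, not the paper's implementation: the model choice (`gpt2`), the patched layer index, the token position, and the zero patch vector are all hypothetical placeholders.

```python
# Minimal activation-patching sketch: overwrite one hidden state during
# inference via a PyTorch forward hook. All concrete choices below
# (model, layer index, patch vector) are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical model; the paper's model may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def make_patch_hook(patch_vector: torch.Tensor, position: int):
    """Return a hook that overwrites the hidden state at one token position."""
    def hook(module, inputs, output):
        # Transformer blocks in GPT-2 return a tuple; hidden states come first.
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[:, position, :] = patch_vector  # dynamic latent intervention
        return output
    return hook

# Register the hook on one transformer block (layer 5 chosen arbitrarily).
patch = torch.zeros(model.config.hidden_size)  # stand-in patch vector
handle = model.transformer.h[5].register_forward_hook(
    make_patch_hook(patch, position=-1)
)

inputs = tokenizer("The Eiffel Tower is located in", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # forward pass runs with the patch applied

handle.remove()  # detach the hook to restore the unmodified model
```

In this style of intervention, comparing the patched output distribution against a clean run reveals how much the altered latent representation contributed to the model's prediction; a decoding step over such representations is what the framework builds on.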