Large Language Models such as GPTs (Generative Pre-trained Transformers) exhibit remarkable capabilities across a broad spectrum of applications. Nevertheless, their intrinsic complexity makes their internal decision-making processes difficult to interpret. This lack of transparency poses critical challenges for their adoption by financial institutions, where accountability and concerns regarding bias, fairness, and reliability are of paramount importance. Mechanistic interpretability aims to reverse engineer complex AI models such as transformers. In this paper, we pioneer the use of mechanistic interpretability to shed light on the inner workings of large language models for use in financial services applications. We offer several examples of how algorithmic tasks can be designed for compliance monitoring purposes. In particular, we investigate GPT-2 Small's attention patterns when prompted to identify potential violations of Fair Lending laws. Using direct logit attribution, we study the contributions of each layer and its corresponding attention heads to the logit difference in the residual stream. Finally, we design clean and corrupted prompts and use activation patching as a causal intervention method to further localize the components responsible for task completion. We observe that the (positive) heads $10.2$ (head $2$ of layer $10$), $10.7$, and $11.3$, as well as the (negative) heads $9.6$ and $10.6$, play significant roles in task completion.
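The abstract references two techniques, direct logit attribution and activation patching. The sketch below is a minimal illustration of how both are commonly run on GPT-2 Small with the open-source TransformerLens library; it is not the paper's implementation. The clean/corrupted prompt pair, the " Yes"/" No" answer tokens, and the logit-difference metric are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): direct logit attribution and
# activation patching on GPT-2 Small via TransformerLens.
import torch
from transformer_lens import HookedTransformer
import transformer_lens.utils as utils

model = HookedTransformer.from_pretrained("gpt2")  # GPT-2 Small

# Hypothetical clean/corrupted prompt pair, chosen to tokenize to the
# same length so activations can be patched position-for-position.
clean_prompt = "The loan was denied because of the applicant's race. Violation:"
corrupt_prompt = "The loan was denied because of the applicant's debt. Violation:"
answer_tokens = torch.tensor(
    [model.to_single_token(" Yes"), model.to_single_token(" No")]
)

clean_tokens = model.to_tokens(clean_prompt)
corrupt_tokens = model.to_tokens(corrupt_prompt)
clean_logits, clean_cache = model.run_with_cache(clean_tokens)

def logit_diff(logits):
    # Logit of " Yes" minus logit of " No" at the final position.
    final = logits[0, -1]
    return (final[answer_tokens[0]] - final[answer_tokens[1]]).item()

print("clean logit diff:", logit_diff(clean_logits))

# --- Direct logit attribution: project each head's write to the residual
# stream (at the last position) onto the logit-difference direction.
dirs = model.tokens_to_residual_directions(answer_tokens)  # [2, d_model]
logit_diff_dir = dirs[0] - dirs[1]
head_stack, labels = clean_cache.stack_head_results(
    layer=-1, pos_slice=-1, return_labels=True
)  # [n_layers * n_heads, batch, d_model]
head_stack = clean_cache.apply_ln_to_stack(head_stack, layer=-1, pos_slice=-1)
head_contrib = head_stack[:, 0, :] @ logit_diff_dir
for label, c in zip(labels, head_contrib):
    if abs(c) > 1.0:  # arbitrary display threshold
        print(f"{label}: {c:.3f}")

# --- Activation patching: run the corrupted prompt but overwrite one
# head's output ("z") with its activation from the clean run.
def patch_head(z, hook, head):
    z[:, :, head, :] = clean_cache[hook.name][:, :, head, :]
    return z

for layer, head in [(9, 6), (10, 2), (10, 6), (10, 7), (11, 3)]:
    patched_logits = model.run_with_hooks(
        corrupt_tokens,
        fwd_hooks=[(utils.get_act_name("z", layer),
                    lambda z, hook, h=head: patch_head(z, hook, h))],
    )
    print(f"head {layer}.{head}: patched logit diff = "
          f"{logit_diff(patched_logits):.3f}")
```

Patching a head's output on the corrupted run and measuring how far the logit difference moves back toward the clean value is what localizes the heads listed above; a head whose patch restores much of the clean behavior is causally implicated in the task.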