Large language models such as GPTs (Generative Pre-trained Transformers) exhibit remarkable capabilities across a broad spectrum of applications. Nevertheless, owing to their intrinsic complexity, these models are difficult to interpret: their internal decision-making processes remain largely opaque. This lack of transparency poses critical challenges to their adoption by financial institutions, where concerns about bias, fairness, and reliability, and the accountability that accompanies them, are of paramount importance. Mechanistic interpretability aims to reverse-engineer complex AI models such as transformers. In this paper, we pioneer the use of mechanistic interpretability to shed light on the inner workings of large language models in financial services applications. We offer several examples of how algorithmic tasks can be designed for compliance-monitoring purposes. In particular, we investigate GPT-2 Small's attention patterns when prompted to identify potential violations of Fair Lending laws. Using direct logit attribution, we study the contribution of each layer and its attention heads to the logit difference in the residual stream. Finally, we design clean and corrupted prompts and use activation patching as a causal intervention method to further localize the components responsible for task completion. We observe that the (positive) heads $10.2$ (layer $10$, head $2$), $10.7$, and $11.3$, as well as the (negative) heads $9.6$ and $10.6$, play a significant role in task completion.
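As a concrete illustration of the two techniques named above, the sketch below applies direct logit attribution and attention-head activation patching to GPT-2 Small using the open-source TransformerLens library. It is a minimal sketch under stated assumptions, not the paper's experimental code: the prompt pair and the ` Yes`/` No` answer tokens are hypothetical stand-ins for the Fair Lending prompts studied in the paper.

```python
# Minimal sketch: direct logit attribution and activation patching on
# GPT-2 Small with TransformerLens. The prompts and answer tokens are
# hypothetical placeholders, not the paper's actual Fair Lending prompts.
import torch
from transformer_lens import HookedTransformer, patching

model = HookedTransformer.from_pretrained("gpt2")  # GPT-2 Small

# Hypothetical clean/corrupted prompt pair differing in a single detail.
# The two prompts must tokenize to the same length for patching to work.
clean_prompt = "The loan was denied because of the applicant's race. Violation?"
corrupt_prompt = "The loan was denied because of the applicant's income. Violation?"
clean_tokens = model.to_tokens(clean_prompt)
corrupt_tokens = model.to_tokens(corrupt_prompt)

# Answer tokens whose logit difference defines the task metric.
yes_tok = model.to_single_token(" Yes")
no_tok = model.to_single_token(" No")

def logit_diff(logits: torch.Tensor) -> torch.Tensor:
    # Difference of the two answer logits at the final position.
    return logits[0, -1, yes_tok] - logits[0, -1, no_tok]

clean_logits, clean_cache = model.run_with_cache(clean_tokens)

# --- Direct logit attribution -------------------------------------------
# Project each head's output at the final position onto the logit-difference
# direction in the residual stream (after applying the final layer norm).
direction = (model.tokens_to_residual_directions(" Yes")
             - model.tokens_to_residual_directions(" No"))
head_results, labels = clean_cache.stack_head_results(
    layer=-1, pos_slice=-1, return_labels=True)
head_results = clean_cache.apply_ln_to_stack(
    head_results, layer=-1, pos_slice=-1)
per_head_attr = head_results @ direction  # one scalar per head, e.g. "L10H2"

# --- Activation patching --------------------------------------------------
# Run the corrupted prompt while patching in each head's clean activation,
# measuring how much of the clean logit difference each head restores.
patch_results = patching.get_act_patch_attn_head_out_all_pos(
    model, corrupt_tokens, clean_cache, logit_diff)  # [n_layers, n_heads]
```

Each entry of `per_head_attr` (labelled, e.g., `L10H2`, i.e., head $10.2$) gives a head's direct contribution to the logit difference, while large entries of `patch_results` flag heads that causally mediate the task; heads scoring high on both are candidate task-completion components.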