Large Language Models (LLMs), such as the GPT-4 and LLaMA families, have demonstrated considerable success across diverse tasks, including multiple-choice questions (MCQs). However, these models exhibit a positional bias, which is especially severe as an anchored bias in the GPT-2 family: during inference, they consistently favour the first choice 'A' in MCQs. This anchored bias undermines the integrity of GPT-2's decision-making process, as it skews performance based on the position rather than the content of the choices. In this study, we use a mechanistic interpretability approach to identify the internal modules within GPT-2 models responsible for this bias. We focus on the Multi-Layer Perceptron (MLP) layers and attention heads, using the "logit lens" method to trace and modify the specific value vectors that contribute to the bias. By updating these vectors within the MLP layers and recalibrating attention patterns to neutralise the preference for the first choice 'A', we effectively mitigate the anchored bias. Our interventions not only correct the bias but also improve the overall MCQ prediction accuracy of the GPT-2 family across various datasets. This work represents the first comprehensive mechanistic analysis of anchored bias in MCQs within GPT-2 models, introducing targeted, minimal-intervention strategies that significantly enhance GPT-2 robustness and accuracy on MCQs. Our code is available at https://github.com/ruizheliUOA/Anchored_Bias_GPT2.
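The core of the "logit lens" technique mentioned above is projecting an intermediate residual-stream vector directly through the model's unembedding matrix, revealing which tokens that layer is currently promoting. A minimal sketch of this projection, using toy random weights rather than real GPT-2 parameters (the dimensions `d`, `V` and the matrix `W_U` here are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Toy stand-ins for a GPT-2-like model: hidden size d, vocab size V,
# and an unembedding matrix W_U mapping hidden states to vocab logits.
rng = np.random.default_rng(0)
d, V = 8, 20
W_U = rng.normal(size=(d, V))

def logit_lens(hidden_state: np.ndarray, W_U: np.ndarray) -> np.ndarray:
    """Project an intermediate hidden state straight to vocabulary logits,
    skipping all remaining layers (the essence of the logit lens)."""
    return hidden_state @ W_U

# Pretend this is the residual stream after some mid-network MLP layer.
h_mid = rng.normal(size=(d,))
logits = logit_lens(h_mid, W_U)
promoted_token = int(np.argmax(logits))  # token id this layer is "promoting"
print(logits.shape, promoted_token)
```

In the paper's setting, inspecting such per-layer logits for the choice tokens ('A', 'B', ...) is what localises the MLP value vectors responsible for the anchored bias.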