Large language models (LLMs) have demonstrated impressive capabilities across various tasks using the in-context learning (ICL) paradigm. However, their effectiveness is often compromised by inherent bias, leading to prompt brittleness, i.e., sensitivity to design settings such as example selection, example order, and prompt formatting. Previous studies have addressed LLM bias through external adjustment of model outputs, but the internal mechanisms that give rise to such bias remain unexplored. Our work delves into these mechanisms, particularly investigating how feedforward neural networks (FFNs) and attention heads contribute to the bias of LLMs. By interpreting the contribution of individual FFN vectors and attention heads, we identify the biased LLM components that skew LLMs' predictions toward specific labels. To mitigate these biases, we introduce UniBias, an inference-only method that effectively identifies and eliminates biased FFN vectors and attention heads. Extensive experiments across 12 NLP datasets demonstrate that UniBias significantly enhances ICL performance and alleviates the prompt brittleness of LLMs.