Large language models (LLMs) have demonstrated impressive capabilities across a wide range of natural language processing tasks. However, their outputs often exhibit social biases, raising fairness concerns. Existing debiasing methods, such as fine-tuning on additional datasets or prompt engineering, face scalability issues or degrade the user experience in multi-turn interactions. To address these challenges, we propose a framework for detecting stereotype-inducing words and attributing bias to individual neurons in LLMs, without fine-tuning or prompt modification. Our framework first identifies stereotype-inducing adjectives and nouns via comparative analysis across demographic groups. We then attribute biased behavior to specific neurons using two attribution strategies based on integrated gradients. Finally, we mitigate bias by directly intervening on those neurons' activations at the projection layer. Experiments on three widely used LLMs demonstrate that our method effectively reduces bias while preserving overall model performance. Code is available at https://github.com/XMUDeepLIT/Bi-directional-Bias-Attribution.
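The attribution-and-intervention pipeline sketched in the abstract can be illustrated with a minimal integrated-gradients example on a toy activation vector. This is a sketch under stated assumptions, not the paper's implementation: the toy readout `model`, its weights, the straight-line baseline, and the zero-out intervention are all illustrative stand-ins for the real projection-layer computation.

```python
import numpy as np

W = np.array([0.5, -1.0, 2.0])  # illustrative projection weights

def model(h):
    # Toy stand-in for a projection-layer readout over neuron activations h.
    return float(W @ (h ** 2))

def grad_model(h):
    # Analytic gradient of the toy readout w.r.t. the activations.
    return 2.0 * W * h

def integrated_gradients(x, baseline, steps=200):
    # Midpoint Riemann-sum approximation of the integrated-gradients
    # path integral along the straight line from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_model(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0, -1.0])   # hypothetical neuron activations
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline)

# Completeness check: attributions sum to model(x) - model(baseline).
assert abs(attr.sum() - (model(x) - model(baseline))) < 1e-6

# Intervention: suppress the most heavily attributed neuron's activation,
# analogous to editing activations at the projection layer.
top = int(np.argmax(np.abs(attr)))
h_edited = x.copy()
h_edited[top] = 0.0
```

The completeness property (attributions summing to the output difference against the baseline) is what makes integrated gradients a natural choice for ranking neurons by their contribution to a biased output before intervening on them.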