The advent of transformer-based language models has reshaped how AI systems process and generate text. In software engineering (SE), these models now support diverse activities, accelerating automation and decision-making. Yet evidence shows that these models can reproduce or amplify social biases, raising fairness concerns. Recent work on neuron editing has shown that internal activations in pre-trained transformers can be traced and modified to alter model behavior. Building on the concept of knowledge neurons, i.e., neurons that encode factual information, we hypothesize the existence of biased neurons that capture stereotypical associations within pre-trained transformers. To test this hypothesis, we build a dataset of biased relations, i.e., triplets encoding stereotypes across nine bias types, and adapt neuron attribution strategies to trace and suppress biased neurons in BERT models. We then assess the impact of suppression on SE tasks. Our findings show that biased knowledge is localized within small subsets of neurons, and that suppressing them substantially reduces bias with minimal performance loss. These results demonstrate that bias in transformers can be traced and mitigated at the neuron level, offering an interpretable approach to fairness in SE.
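The abstract does not spell out the attribution and suppression procedure. Below is a minimal sketch, assuming an integrated-gradients-style attribution over the FFN intermediate neurons of a single BERT layer (in the spirit of the knowledge-neurons line of work), followed by zeroing the top-attributed neurons at inference time. The checkpoint (bert-base-uncased), prompt, target token, layer index, step count, and top-k are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: integrated-gradients-style attribution over BERT FFN neurons
# and suppression of the top-attributed "biased" neurons. All concrete choices
# (prompt, target token, layer, hyperparameters) are hypothetical.
import torch
from transformers import BertForMaskedLM, BertTokenizer

MODEL = "bert-base-uncased"
tok = BertTokenizer.from_pretrained(MODEL)
model = BertForMaskedLM.from_pretrained(MODEL).eval()
for p in model.parameters():
    p.requires_grad_(False)  # gradients are only needed w.r.t. neuron activations

prompt = "Women are [MASK] at programming."   # hypothetical biased-relation prompt
target_id = tok.convert_tokens_to_ids("bad")  # hypothetical stereotypical completion
enc = tok(prompt, return_tensors="pt")
mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().item()

LAYER, STEPS, TOP_K = 9, 20, 5
ffn = model.bert.encoder.layer[LAYER].intermediate  # post-GELU FFN activations

# 1) Record the baseline activation of the chosen FFN layer at the [MASK] position.
cache = {}
handle = ffn.register_forward_hook(lambda m, i, o: cache.update(act=o.detach()))
with torch.no_grad():
    model(**enc)
handle.remove()
baseline = cache["act"][0, mask_pos]  # shape: (intermediate_size,)

# 2) Integrated gradients (Riemann sum): rescale the activation from 0 to its
#    observed value and accumulate gradients of the stereotype token's probability.
grad_sum = torch.zeros_like(baseline)
for step in range(1, STEPS + 1):
    scaled = (step / STEPS * baseline).clone().requires_grad_(True)

    def patch(module, inputs, output, scaled=scaled):
        out = output.clone()
        out[0, mask_pos] = scaled  # inject the rescaled activation at the mask token
        return out

    handle = ffn.register_forward_hook(patch)
    logits = model(**enc).logits
    prob = torch.softmax(logits[0, mask_pos], dim=-1)[target_id]
    prob.backward()
    grad_sum += scaled.grad
    handle.remove()

attribution = baseline * grad_sum / STEPS     # per-neuron attribution scores
biased_neurons = attribution.topk(TOP_K).indices

# 3) Suppress the top-attributed neurons by zeroing their activations.
def suppress(module, inputs, output):
    out = output.clone()
    out[:, :, biased_neurons] = 0.0
    return out

ffn.register_forward_hook(suppress)  # subsequent forward passes run with suppression
```

After the last hook is registered, any downstream evaluation (e.g., on bias probes or SE tasks) runs with the selected neurons silenced, which is one way to operationalize the "suppress and re-assess" step the abstract describes.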