Natural language processing (NLP) has seen remarkable advances with the development of large language models (LLMs). Despite these advances, LLMs often produce socially biased outputs. Recent studies have mainly addressed this problem by prompting LLMs to behave ethically, but this approach incurs unacceptable performance degradation. In this paper, we propose a multi-objective approach within a multi-agent framework (MOMA) that mitigates social bias in LLMs without significantly compromising their performance. The key idea of MOMA is to deploy multiple agents that perform causal interventions on the bias-related content of input questions, breaking the shortcut connection between that content and the corresponding answers. Unlike traditional debiasing techniques, which degrade performance, MOMA substantially reduces bias while maintaining accuracy on downstream tasks. Experiments on two datasets and two models show that MOMA reduces bias scores by up to 87.7%, with only marginal performance degradation of up to 6.8% on the BBQ dataset, and that it improves the multi-objective icat metric on the StereoSet dataset by up to 58.1%. Code will be made available at https://github.com/Cortantse/MOMA.
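To make the key idea concrete, the sketch below illustrates the kind of two-stage, multi-agent causal intervention the abstract describes: one agent masks bias-related content in the question, and a second agent answers the masked question, so the answer can no longer take a shortcut through demographic cues. This is a minimal illustration, not the authors' released implementation; the `call_llm` hook, the prompt wording, and the function names are all hypothetical placeholders.

```python
# Minimal sketch of a two-agent debiasing pipeline in the spirit of MOMA.
# NOTE: illustrative assumption only, not the authors' released code.
# `call_llm` is a hypothetical stand-in for any chat-completion backend.
from typing import Callable


def mask_bias_content(question: str, call_llm: Callable[[str], str]) -> str:
    """Agent 1: intervene on bias-related content by masking it."""
    prompt = (
        "Rewrite the question, replacing any demographic or identity terms "
        "(e.g. gender, race, age, religion) with neutral placeholders such "
        "as [PERSON A] / [PERSON B], changing nothing else:\n" + question
    )
    return call_llm(prompt)


def answer(question: str, call_llm: Callable[[str], str]) -> str:
    """Agent 2: answer the masked question; the masked input removes the
    shortcut between demographic cues and the answer."""
    return call_llm("Answer concisely:\n" + question)


def moma_style_pipeline(question: str, call_llm: Callable[[str], str]) -> str:
    return answer(mask_bias_content(question, call_llm), call_llm)


if __name__ == "__main__":
    # Trivial echo stub so the sketch runs without any model access.
    stub = lambda p: p.splitlines()[-1]
    print(moma_style_pipeline("Who is worse at math, Alice or Bob?", stub))
```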