Large language model (LLM)-based judges are widely adopted for automated evaluation and reward modeling, yet their judgments often suffer from systematic biases. Accurately quantifying these biases is essential for ensuring the reliability of LLM-based judges. However, existing studies typically investigate only a limited set of biases under a single judge formulation, either generative or discriminative, and thus fall short of a comprehensive evaluation. To bridge this gap, we propose JudgeBiasBench, a benchmark for systematically quantifying biases in LLM-based judges. JudgeBiasBench defines a taxonomy of judgment biases spanning four dimensions and constructs bias-augmented evaluation instances through a controlled bias injection pipeline, covering 12 representative bias types. Extensive experiments across both generative and discriminative judges reveal that current judges exhibit significant and diverse bias patterns that often compromise the reliability of automated evaluation. To mitigate judgment bias, we propose bias-aware training, which explicitly incorporates bias-related attributes into the training process and encourages judges to disentangle task-relevant quality from bias-correlated cues. By adopting reinforcement learning for generative judges and contrastive learning for discriminative judges, our methods effectively reduce judgment biases while largely preserving general evaluation capability.
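The controlled bias injection idea can be illustrated as follows: hold response quality fixed, perturb only a bias-correlated attribute, and measure how often the judge's preference flips. This is a minimal sketch, not the benchmark's actual pipeline; the `inject_verbosity_bias` helper and the toy length-biased judge are hypothetical stand-ins for one of the 12 bias types.

```python
def inject_verbosity_bias(response: str) -> str:
    """Hypothetical bias injection: pad a response with filler that adds
    length but no task-relevant quality."""
    filler = " To elaborate further, this point merits additional discussion."
    return response + filler * 3

def flip_rate(judge, pairs):
    """Fraction of (better, worse) pairs where the judge's preference flips
    after the worse response receives the injected bias attribute.
    `judge(a, b)` returns 0 if it prefers a, 1 if it prefers b."""
    flips = 0
    for good, bad in pairs:
        before = judge(good, bad)
        after = judge(good, inject_verbosity_bias(bad))
        flips += int(before == 0 and after == 1)
    return flips / len(pairs)

# Toy length-biased judge for illustration: always prefers the longer response.
length_judge = lambda a, b: int(len(b) > len(a))

pairs = [("a thorough, correct answer", "wrong"),
         ("detailed correct solution", "bad")]
print(flip_rate(length_judge, pairs))  # → 1.0: this fully length-biased toy judge flips on every pair
```

A flip rate near zero indicates robustness to that bias attribute; a real pipeline would repeat this per bias type and judge formulation.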
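For discriminative judges, the contrastive bias-aware objective can be sketched as combining a bias-invariance term with an ordinary quality-ranking term. The loss below is an illustrative sketch under that assumption, not the paper's exact formulation; all scores are assumed to come from a hypothetical scalar-scoring judge.

```python
def bias_invariance_loss(s_clean, s_biased, s_good, s_bad, margin=1.0):
    """Sketch of a bias-aware contrastive objective (hypothetical, not the
    paper's exact loss): pull the scores of a response and its bias-injected
    copy together, while keeping a pairwise ranking margin on quality."""
    invariance = (s_clean - s_biased) ** 2          # bias attribute should not move the score
    ranking = max(0.0, margin - (s_good - s_bad))   # standard pairwise hinge on quality
    return invariance + ranking

# Scores from a hypothetical discriminative judge:
print(bias_invariance_loss(0.8, 0.8, 0.8, 0.2))  # → 0.4: invariant to the bias, but ranking margin not yet met
```

The invariance term disentangles bias-correlated cues from the score, while the hinge term preserves general evaluation capability, mirroring the trade-off described above.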