Large language models (LLMs) are increasingly deployed as automatic judges to evaluate system outputs in tasks such as summarization, dialogue, and creative writing. A faithful judge should base its verdicts solely on response quality and explicitly acknowledge the factors shaping its decision. We show that current LLM judges fail on both counts by relying on shortcuts introduced in the prompt. Our study uses two evaluation datasets: ELI5, a benchmark for long-form question answering, and LitBench, a recent benchmark for creative writing. Both datasets provide pairwise comparisons in which the evaluator must choose which of two responses is better. From each dataset we construct 100 pairwise judgment tasks and employ two widely used models, GPT-4o and Gemini-2.5-Flash, as evaluators in the LLM-as-a-judge role. For each pair, we assign superficial cues to the responses: provenance cues indicating source identity (Human, Expert, LLM, or Unknown) and recency cues indicating temporal origin (Old, 1950 vs. New, 2025), while keeping the rest of the prompt fixed. Results reveal consistent verdict shifts: both models exhibit a strong recency bias, systematically favoring new responses over old, and a clear provenance hierarchy (Expert > Human > LLM > Unknown). These biases are especially pronounced in GPT-4o and in the more subjective, open-ended LitBench domain. Crucially, cue acknowledgment is rare: justifications almost never reference the injected cues, instead rationalizing decisions in terms of content quality. These findings demonstrate that current LLM-as-a-judge systems are shortcut-prone and unfaithful, undermining their reliability as evaluators in both research and deployment.
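To make the cue-injection setup concrete, the sketch below shows one way such a pairwise judging prompt could be assembled, varying provenance and recency cues while keeping the rest of the prompt fixed. The template wording, cue labels, and helper names are illustrative assumptions for exposition, not the paper's actual prompts.

```python
from itertools import product

# Hypothetical cue vocabularies; the paper's exact wording is not reproduced here.
PROVENANCE_CUES = ["Expert", "Human", "LLM", "Unknown"]
RECENCY_CUES = {"Old": "1950", "New": "2025"}

# Assumed judging template: only the cue slots change across conditions.
JUDGE_TEMPLATE = """You are evaluating two answers to the same question.

Question: {question}

Response A (source: {src_a}, written in {year_a}):
{resp_a}

Response B (source: {src_b}, written in {year_b}):
{resp_b}

Which response is better? Answer "A" or "B" and briefly justify your choice."""


def build_prompt(question, resp_a, resp_b, src_a, src_b, year_a, year_b):
    """Inject provenance and recency cues while holding everything else fixed."""
    return JUDGE_TEMPLATE.format(
        question=question,
        resp_a=resp_a, resp_b=resp_b,
        src_a=src_a, src_b=src_b,
        year_a=year_a, year_b=year_b,
    )


def cue_conditions():
    """Enumerate contrastive cue assignments for one response pair."""
    for src_a, src_b in product(PROVENANCE_CUES, repeat=2):
        if src_a == src_b:
            continue  # identical provenance on both sides is uninformative
        for (lab_a, year_a), (lab_b, year_b) in product(RECENCY_CUES.items(), repeat=2):
            if lab_a == lab_b:
                continue  # likewise skip identical recency cues
            yield dict(src_a=src_a, src_b=src_b, year_a=year_a, year_b=year_b)


if __name__ == "__main__":
    question = "Why is the sky blue?"
    resp_a = "Sunlight scatters off air molecules; shorter (blue) wavelengths scatter most."
    resp_b = "Blue light is scattered by the atmosphere more than other colors."
    for cond in list(cue_conditions())[:2]:  # print two example conditions
        print(build_prompt(question, resp_a, resp_b, **cond))
        print("-" * 60)
```

In a full run, each generated prompt would be sent to the judge model (e.g., GPT-4o or Gemini-2.5-Flash) and verdicts compared across cue conditions for the same underlying response pair; a verdict that flips when only the cues change indicates shortcut reliance.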