Human preference evaluations are widely used to compare generative models, yet it remains unclear how many judgments are required to reliably detect small improvements. We show that when the preference signal is diffuse across prompts (i.e., all prompt types are similarly informative), proportional allocation is minimax-optimal: no allocation strategy substantially improves detectability. Empirical analysis of large-scale human preference datasets shows that most comparisons fall into this diffuse regime, exhibiting small preference margins that require far more judgments than are typically collected, even in well-sampled comparisons. These limits persist across evaluation protocols and modalities, including chat, image generation, and code generation with execution feedback. In contrast, curated benchmarks that reduce prompt-induced variability systematically induce larger margins and improve detectability through a $1.5\times$ reduction in prompt-level variance. Our results show that inconclusive or negative human evaluation outcomes frequently reflect underpowered evaluation rather than model equivalence, underscoring the need to account explicitly for effect size, budget, and protocol design.
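To make the sample-size claim concrete, a minimal sketch of a standard power calculation for detecting a preference margin with a two-sided binomial test follows. This is an illustrative normal-approximation formula, not the paper's exact procedure, and it assumes i.i.d. Bernoulli judgments, ignoring the prompt-level variance discussed above; the `required_judgments` helper name is hypothetical.

```python
from statistics import NormalDist

def required_judgments(margin: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate number of pairwise judgments needed to detect a win rate
    of 0.5 + margin against the null of 0.5, using the normal-approximation
    power formula for a two-sided binomial test (i.i.d. assumption)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value under the null (p = 0.5)
    z_beta = z(power)            # quantile for the desired power
    p = 0.5 + margin
    # Null standard deviation is sqrt(0.25) = 0.5; alternative is sqrt(p(1-p)).
    n = (z_alpha * 0.5 + z_beta * (p * (1 - p)) ** 0.5) ** 2 / margin ** 2
    return int(n) + 1

# A 2-point margin (52% vs 48% win rate) already requires on the order of
# 5,000 judgments at 80% power, far beyond typical collection sizes.
print(required_judgments(0.02))
```

The quadratic growth of the required sample size in $1/\text{margin}$ is what makes diffuse, small-margin comparisons so expensive to resolve.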