As language models become integral to critical workflows, assessing their behavior remains a fundamental challenge -- human evaluation is costly and noisy, while automated metrics provide only coarse, difficult-to-interpret signals. We introduce natural language unit tests, a paradigm that decomposes response quality into explicit, testable criteria, together with LMUnit, a unified scoring model trained with multiple objectives spanning preferences, direct ratings, and natural language rationales. Through controlled human studies, we show that this paradigm significantly improves inter-annotator agreement and enables more effective LLM development workflows. LMUnit achieves state-of-the-art performance on the FLASK and BigGenBench evaluation benchmarks and competitive results on RewardBench. These results validate both the proposed paradigm and our scoring model, suggesting a promising path forward for language model evaluation and development.