As language models become integral to critical workflows, assessing their behavior remains a fundamental challenge: human evaluation is costly and noisy, while automated metrics provide only coarse, difficult-to-interpret signals. We introduce natural language unit tests, a paradigm that decomposes response quality into explicit, testable criteria, together with LMUnit, a unified scoring model trained with a multi-objective mix of preference data, direct ratings, and natural language rationales. Through controlled human studies, we show that this paradigm significantly improves inter-annotator agreement and enables more effective LLM development workflows. LMUnit achieves state-of-the-art performance on the FLASK and BigGenBench evaluation benchmarks and competitive results on RewardBench. These results validate both the proposed paradigm and the scoring model, suggesting a promising path forward for language model evaluation and development.