Rubrics are essential for evaluating open-ended LLM responses, especially in safety-critical domains such as healthcare. However, creating high-quality, domain-specific rubrics typically requires substantial expert time and development cost, making rubric-based evaluation and training difficult to scale. In this work, we introduce Health-SCORE, a generalizable and scalable rubric-based training and evaluation framework that substantially reduces rubric development cost without sacrificing performance. We show that Health-SCORE offers two practical benefits beyond standalone evaluation: it can serve as a structured reward signal to guide reinforcement learning with safety-aware supervision, and it can be incorporated directly into prompts to improve response quality through in-context learning. Across open-ended healthcare tasks, Health-SCORE achieves evaluation quality comparable to human-created rubrics while requiring significantly less development effort, making rubric-based evaluation and training more scalable.