Large Language Models (LLMs) are increasingly utilized for domain-specific tasks, yet integrating domain expertise into the evaluation of their outputs remains challenging. A common approach to evaluating LLMs is to use metrics, or criteria: assertions for assessing performance that help ensure outputs align with domain-specific standards. Previous efforts have involved developers, lay users, or the LLMs themselves in creating these criteria; however, evaluation from a domain-expertise perspective in particular remains understudied. This study explores how domain experts contribute to LLM evaluation by comparing their criteria with those generated by LLMs and lay users. We further investigate how the criteria-setting process evolves by analyzing changes between a priori and a posteriori stages. Our findings emphasize the importance of involving domain experts early in the evaluation process while leveraging the complementary strengths of lay users and LLMs. We suggest implications for designing workflows that draw on these strengths at different evaluation stages.