Due to the cumbersome nature of human evaluation and the limitations of code-based evaluation, Large Language Models (LLMs) are increasingly being used to assist humans in evaluating LLM outputs. Yet LLM-generated evaluators simply inherit all the problems of the LLMs they evaluate, requiring further human validation. We present a mixed-initiative approach to ``validate the validators'' -- aligning LLM-generated evaluation functions (be they prompts or code) with human requirements. Our interface, EvalGen, provides automated assistance to users in generating evaluation criteria and implementing assertions. While generating candidate implementations (Python functions, LLM grader prompts), EvalGen asks humans to grade a subset of LLM outputs; this feedback is used to select implementations that better align with user grades. A qualitative study finds overall support for EvalGen but underscores the subjectivity and iterative nature of alignment. In particular, we identify a phenomenon we dub \emph{criteria drift}: users need criteria to grade outputs, but grading outputs helps users define criteria. What is more, some criteria appear \emph{dependent} on the specific LLM outputs observed (rather than being definable \emph{a priori}, independently of those outputs), raising serious questions for approaches that assume evaluation can be specified independently of observing model outputs. We present our interface and implementation details, a comparison of our algorithm with a baseline approach, and implications for the design of future LLM evaluation assistants.
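As a rough illustration of the selection step described above (a minimal sketch, not the paper's algorithm; the criterion, candidate assertions, outputs, and grades are all hypothetical), candidate assertion implementations can be ranked by their agreement with a user's pass/fail grades on a sampled subset of outputs:

\begin{verbatim}
# Minimal sketch: pick, among candidate assertion implementations for one
# criterion, the one that best agrees with human pass/fail grades.
# All data and candidates below are hypothetical illustrations.
from typing import Callable, List


def alignment(assertion: Callable[[str], bool],
              outputs: List[str],
              human_grades: List[bool]) -> float:
    """Fraction of graded outputs where the assertion matches the human grade."""
    matches = sum(assertion(o) == g for o, g in zip(outputs, human_grades))
    return matches / len(outputs)


# Two hypothetical implementations of a "response is concise" criterion.
candidates = [
    lambda out: len(out.split()) < 50,   # word-count threshold
    lambda out: len(out) < 280,          # character-count threshold
]

outputs = ["A short, direct answer.",
           "A much longer answer that rambles on well past the point " * 5]
human_grades = [True, False]  # user's pass/fail grades on these outputs

# Keep the candidate whose verdicts best match the user's grades.
best = max(candidates, key=lambda a: alignment(a, outputs, human_grades))
\end{verbatim}

In practice, a selection criterion may also need to weight failure modes unequally (e.g., penalize assertions that fail outputs the user graded as passing), which this sketch omits.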