With Large Language Models (LLMs) being widely used across various tasks, detecting errors in their responses is increasingly crucial. However, little research has been conducted on error detection of LLM responses. Collecting error annotations on LLM responses is challenging due to the subjective nature of many NLP tasks, and thus previous research has focused on tasks of little practical value (e.g., word sorting) or limited error types (e.g., faithfulness in summarization). This work introduces ReaLMistake, the first error detection benchmark consisting of objective, realistic, and diverse errors made by LLMs. ReaLMistake contains three challenging and meaningful tasks that introduce objectively assessable errors in four categories (reasoning correctness, instruction-following, context-faithfulness, and parameterized knowledge), eliciting naturally observed and diverse errors in responses of GPT-4 and Llama 2 70B, annotated by experts. We use ReaLMistake to evaluate error detectors based on 12 LLMs. Our findings show: 1) Top LLMs like GPT-4 and Claude 3 detect errors made by LLMs at very low recall, and all LLM-based error detectors perform much worse than humans. 2) Explanations provided by LLM-based error detectors lack reliability. 3) LLM-based error detection is sensitive to small changes in prompts but remains challenging to improve. 4) Popular approaches to improving LLMs, including self-consistency and majority vote, do not improve error detection performance. Our benchmark and code are provided at https://github.com/psunlpgroup/ReaLMistake.