Detecting evidence within the context is a key step in reasoning tasks. Evaluating and enhancing LLMs' evidence-detection capabilities will therefore strengthen their context-based reasoning performance. This paper proposes DetectBench, a benchmark for verifying the ability to detect and piece together implicit evidence within a long context. DetectBench contains 3,928 multiple-choice questions, with an average of 994 tokens per question. Each question contains an average of 4.55 pieces of implicit evidence, and solving it typically requires 7.62 logical jumps to reach the correct answer. To improve LLMs' performance in evidence detection, this paper proposes the Detective Reasoning Prompt and a finetuning method. Experiments demonstrate that existing LLMs' abilities to detect evidence in long contexts fall far short of human performance. However, the Detective Reasoning Prompt effectively enhances the evidence-detection capability of powerful LLMs, while finetuning yields significant gains for weaker LLMs. Moreover, improving LLMs' evidence-detection ability also enhances their final reasoning performance.