Large language models (LLMs) have achieved remarkable success across a wide range of natural language processing tasks. However, LLMs still face significant challenges in complex scenarios involving multiple entities, where implicit relationships demand multi-step reasoning. In this paper, we propose ERA-CoT, a novel approach that helps LLMs understand context by capturing relationships between entities and supports reasoning over diverse tasks through Chain-of-Thought (CoT) prompting. Experimental results show that ERA-CoT outperforms current CoT prompting methods, achieving an average improvement of 5.1\% on GPT-3.5 over previous SOTA baselines. Our analysis indicates that ERA-CoT deepens the LLM's understanding of entity relationships, significantly improves question-answering accuracy, and enhances the reasoning ability of LLMs.