In the field of Question Answering (QA), unifying large language models (LLMs) with external databases has shown great success. However, these methods often fall short of the advanced reasoning required for complex QA tasks. To address this gap, we build on a novel approach called Knowledge Graph Prompting (KGP), which combines knowledge graphs with an LLM-based agent to improve reasoning and search accuracy. Nevertheless, the original KGP framework requires costly fine-tuning on large datasets and still suffers from LLM hallucination. We therefore propose a reasoning-infused LLM agent to enhance this framework. The agent mimics human curiosity, asking follow-up questions to navigate the search more efficiently. This simple modification significantly boosts LLM performance on QA tasks without the high cost and latency of the original KGP framework. Our ultimate goal is to further develop this approach toward more accurate, faster, and cost-effective solutions in the QA domain.