The adoption of Large Language Models (LLMs) is rapidly expanding across tasks that involve inherent graphical structures. Graphs are integral to a wide range of applications, including motion planning for autonomous vehicles, social networks, scene understanding, and knowledge graphs. Many problems, even those not initially perceived as graph-based, can be effectively addressed through graph theory. However, when applied to these tasks, LLMs often encounter challenges, such as hallucinations and mathematical inaccuracies. To overcome these limitations, we propose Graph-Grounded LLMs, a system that improves LLM performance on graph-related tasks by integrating a graph library through function calls. By grounding LLMs in this manner, we demonstrate significant reductions in hallucinations and improved mathematical accuracy in solving graph-based problems, as evidenced by performance on the NLGraph benchmark. Finally, we showcase a disaster rescue application in which the Graph-Grounded LLM acts as a decision-support system.
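The grounding idea described above can be sketched as follows: instead of asking the model to compute graph answers itself, the model emits a structured function call that is dispatched to an exact graph library. This is a minimal illustrative sketch, assuming `networkx` as the graph library; the function name, call schema, and dispatch table are hypothetical and not the paper's actual API.

```python
# Minimal sketch of graph grounding via function calls (illustrative only).
# The LLM is assumed to emit a JSON call like {"name": ..., "args": ...};
# the call is routed to networkx, which computes an exact answer.
import json
import networkx as nx

def shortest_path(edges, source, target):
    """Exact weighted shortest path via networkx, avoiding LLM arithmetic errors."""
    g = nx.Graph()
    g.add_weighted_edges_from(edges)  # edges: iterable of (u, v, weight)
    return nx.shortest_path(g, source, target, weight="weight")

# Hypothetical registry of graph tools exposed to the model.
TOOLS = {"shortest_path": shortest_path}

def dispatch(call_json):
    """Route a model-emitted function call to the graph library."""
    call = json.loads(call_json)
    return TOOLS[call["name"]](**call["args"])

# Example: a call the model might emit for a weighted shortest-path query.
call = json.dumps({
    "name": "shortest_path",
    "args": {"edges": [["A", "B", 1], ["B", "C", 1], ["A", "C", 5]],
             "source": "A", "target": "C"},
})
print(dispatch(call))  # library-computed path rather than a model guess
```

The model's remaining job is translation (natural language to a tool call) rather than computation, which is where the reduction in hallucination and arithmetic error comes from.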