Large Language Models (LLMs) have achieved impressive results in processing text data, sparking interest in applying these models beyond text, for example to graphs. In the field of graph learning, there is growing interest in harnessing LLMs to comprehend and manipulate graph-structured data. Existing research predominantly focuses on graphs with rich textual features, such as knowledge graphs or text-attributed graphs, leveraging LLMs' ability to process text while inadequately addressing graph structure. This work specifically aims to assess and enhance LLMs' ability to comprehend and utilize the structural knowledge inherent in graph data itself, rather than focusing solely on graphs rich in textual content. To this end, we introduce the \textbf{G}raph \textbf{U}nderstanding for \textbf{N}atural Language \textbf{D}riven \textbf{A}nalytical \textbf{M}odel (\model). This model adapts LLMs to better understand and engage with the structure of graph data, enabling them to perform complex reasoning tasks by leveraging the graph's structure itself. Our experimental evaluations on graph reasoning benchmarks not only demonstrate that \model~outperforms state-of-the-art baselines, but also reveal key factors affecting the graph reasoning capabilities of LLMs. Moreover, we provide a theoretical analysis illustrating how reasoning paths can enhance LLMs' reasoning capabilities.