The integration of Large Language Models (LLMs) with Graph Neural Networks (GNNs) has recently been explored to enhance the capabilities of text-attributed graphs (TAGs). Most existing methods feed textual descriptions of the graph structure or the text of neighboring nodes directly into LLMs. However, these approaches often cause LLMs to treat structural information merely as general contextual text, limiting their effectiveness in graph-related tasks. In this paper, we introduce LanSAGNN (Language Semantic Anisotropic Graph Neural Network), a framework that extends the concept of anisotropic GNNs to the natural language level. The model leverages LLMs to extract tailor-made semantic information for node pairs, effectively capturing the unique interactions within node relationships. In addition, we propose an efficient dual-layer LLM fine-tuning architecture to better align LLM outputs with graph tasks. Experimental results demonstrate that LanSAGNN significantly enhances existing LLM-based methods without increasing complexity, while also exhibiting strong robustness against interference.
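To illustrate the core idea of anisotropy at the semantic level, the sketch below shows a minimal message-passing layer in which each directed edge carries its own pair-specific vector, standing in for the tailor-made semantic embedding an LLM would produce for that node pair. This is a hedged illustration of the general anisotropic GNN concept, not the actual LanSAGNN implementation; all function and variable names (`anisotropic_message_passing`, `edge_sem`, `W_self`, `W_msg`) are hypothetical.

```python
import numpy as np

def anisotropic_message_passing(node_feats, edges, edge_sem, W_self, W_msg):
    """One anisotropic message-passing layer (illustrative sketch).

    Unlike isotropic aggregation, each directed edge (u, v) is modulated
    by its own semantic vector edge_sem[(u, v)] -- a stand-in for the
    pair-specific embedding an LLM would extract for that node pair.
    """
    n, d = node_feats.shape
    agg = np.zeros_like(node_feats)   # aggregated incoming messages
    deg = np.zeros(n)                 # in-degree for mean aggregation
    for (u, v) in edges:
        # the message from u to v is shaped by edge-specific semantics,
        # so two neighbors of v can contribute very differently
        agg[v] += node_feats[u] * edge_sem[(u, v)]
        deg[v] += 1
    agg /= np.maximum(deg, 1)[:, None]  # avoid division by zero for isolated nodes
    return np.tanh(node_feats @ W_self + agg @ W_msg)

# toy usage: 4 nodes, 3-dimensional features, 3 directed edges
rng = np.random.default_rng(0)
n, d = 4, 3
X = rng.normal(size=(n, d))
edges = [(0, 1), (2, 1), (1, 3)]
sem = {e: rng.normal(size=d) for e in edges}  # one semantic vector per edge
out = anisotropic_message_passing(X, edges, sem,
                                  rng.normal(size=(d, d)),
                                  rng.normal(size=(d, d)))
```

The key contrast with an isotropic layer is that replacing `edge_sem[(u, v)]` with a constant vector would make all neighbors of `v` interchangeable, which is exactly the limitation the anisotropic design avoids.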