Despite recent community advances demonstrating the potential of Large Language Models (LLMs) for understanding Text-Attributed Graphs (TAGs), deploying LLMs in production is hindered by their high computational and storage requirements and long inference latencies. Meanwhile, although traditional Graph Neural Networks (GNNs) are lightweight and adept at learning the structural features of graphs, their ability to capture the complex semantics in TAGs is limited in real-world applications. To address these limitations, we focus on the downstream task of node classification in TAGs and propose a novel graph knowledge distillation framework, termed Linguistic Graph Knowledge Distillation (LinguGKD), which uses LLMs as teacher models and GNNs as student models. LinguGKD first performs TAG-oriented instruction tuning of the LLM on designed node classification prompts, and then aligns the hierarchically learned node features of the teacher LLM and the student GNN in a shared latent space via a layer-adaptive contrastive learning strategy. Extensive experiments across a variety of LLM and GNN models on multiple benchmark datasets show that LinguGKD significantly improves the student GNN's predictive accuracy and convergence rate without requiring extra data or model parameters. Compared with the teacher LLM, the distilled GNN achieves much faster inference with far lower computing and storage demands, while even surpassing the teacher's classification performance on some benchmark datasets.
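To make the distillation step concrete, the following is a minimal, illustrative sketch of a layer-wise contrastive alignment between teacher (LLM) and student (GNN) node features. All names, the InfoNCE-style loss form, and the use of per-layer projection heads are assumptions for illustration; the paper's actual layer-adaptive weighting and alignment details are not specified here.

```python
import torch
import torch.nn.functional as F

def layerwise_contrastive_loss(llm_feats, gnn_feats, projections, temperature=0.1):
    """Illustrative sketch (not the paper's exact loss): align each GNN layer's
    node features with projected LLM features using an InfoNCE-style objective.

    llm_feats:   list of teacher feature tensors, one per aligned layer, shape [N, d_t]
    gnn_feats:   list of student feature tensors, one per GNN layer,     shape [N, d_s]
    projections: list of nn.Module heads mapping d_t -> d_s (hypothetical helpers)
    """
    total = 0.0
    for h_llm, h_gnn, proj in zip(llm_feats, gnn_feats, projections):
        # Project teacher features into the student's latent space and normalize.
        z_t = F.normalize(proj(h_llm), dim=-1)  # [N, d_s]
        z_s = F.normalize(h_gnn, dim=-1)        # [N, d_s]
        # Positive pairs are the same node's teacher/student features;
        # all other nodes in the batch act as negatives.
        logits = z_s @ z_t.t() / temperature    # [N, N]
        labels = torch.arange(z_s.size(0), device=z_s.device)
        total = total + F.cross_entropy(logits, labels)
    return total / len(gnn_feats)
```

In practice, a loss of this form would be added to the student's supervised node-classification loss during training, so the GNN learns both from labels and from the teacher's hierarchical representations.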