Graph data contains rich node features and unique edge information and has been applied across various domains, such as citation networks and recommendation systems. Graph Neural Networks (GNNs) are specialized for handling such data and have shown impressive performance in many applications. However, GNNs may encode sensitive information and are susceptible to privacy attacks. For example, link stealing is an attack in which an adversary infers whether two nodes are linked. Previous link stealing attacks relied primarily on the posterior probabilities output by the target GNN model, neglecting the significance of node features. Moreover, because the number of node classes varies across datasets, the posterior probabilities differ in dimension, which makes it difficult for a single model to conduct link stealing attacks effectively on different datasets. To address these challenges, we introduce Large Language Models (LLMs) to perform link stealing attacks on GNNs. LLMs can effectively integrate textual features and exhibit strong generalizability, enabling attacks to handle the diverse data dimensions found across datasets. We design two distinct LLM prompts that combine the textual features and posterior probabilities of graph nodes, and we fine-tune the LLM on these prompts to adapt it to the link stealing attack task. Furthermore, we fine-tune the LLM on multiple datasets jointly, enabling it to learn features from different datasets simultaneously. Experimental results show that our approach significantly improves on existing link stealing attacks in both white-box and black-box scenarios. Our method can execute link stealing attacks across different datasets with a single model, making link stealing attacks more applicable to real-world scenarios.
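To make the prompt design concrete, the following is a minimal sketch of how one attack query might serialize two nodes' textual features and target-model posteriors into a single prompt. The function name, template wording, and toy values are illustrative assumptions rather than the paper's exact prompt; the key idea shown is that serializing posteriors as text lets one model accept posterior vectors of differing dimensions.

```python
# Illustrative sketch only: build_prompt, the template wording, and the
# example posteriors are assumptions, not the authors' exact design.

def build_prompt(text_a: str, text_b: str,
                 posterior_a: list[float], posterior_b: list[float]) -> str:
    """Format a link-stealing query for the fine-tuned LLM.

    Because the posteriors are serialized as text, their dimension may
    vary across datasets without changing the model's input format.
    """
    post_a = ", ".join(f"{p:.3f}" for p in posterior_a)
    post_b = ", ".join(f"{p:.3f}" for p in posterior_b)
    return (
        "Node A description: " + text_a + "\n"
        f"Node A posterior probabilities: [{post_a}]\n"
        "Node B description: " + text_b + "\n"
        f"Node B posterior probabilities: [{post_b}]\n"
        "Question: Are node A and node B connected by an edge? "
        "Answer yes or no."
    )

# Example usage with toy values; in practice the posteriors come from
# querying the target GNN (white-box or black-box access).
prompt = build_prompt(
    "Paper on graph neural networks for citation analysis.",
    "Paper on message passing in GNNs.",
    [0.82, 0.11, 0.07],
    [0.75, 0.20, 0.05],
)
print(prompt)
```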