Graph Prompt Learning (GPL) represents an innovative approach in graph representation learning, enabling task-specific adaptation by fine-tuning prompts without altering the underlying pre-trained model. Despite its growing prominence, the privacy risks inherent in GPL remain unexplored. In this study, we provide the first evaluation of privacy leakage in GPL across three attacker capabilities: black-box attacks when GPL is deployed as a service, and scenarios in which node embeddings or prompt representations are accessible to third parties. We assess GPL's privacy vulnerabilities through Attribute Inference Attacks (AIAs) and Link Inference Attacks (LIAs), finding that under every capability attackers can effectively infer the attributes and relationships of sensitive nodes, with inference success rates reaching 98% on some datasets. Importantly, while targeted inference attacks on specific prompt designs (e.g., GPF-plus) maintain high success rates, our analysis suggests that prompt tuning in GPL does not significantly elevate privacy risks compared to traditional GNNs. To mitigate these risks, we explore defense mechanisms and find that Laplacian noise perturbation can substantially reduce inference success, although balancing privacy protection with model performance remains challenging. This work highlights critical privacy risks in GPL, offering new insights and foundational directions for future privacy-preserving strategies in graph learning.
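To make the attribute-inference threat model concrete, below is a minimal sketch assuming the attacker can obtain node embeddings (the second capability above) and holds a shadow set of nodes with known sensitive attributes. The embedding dimensionality, the logistic-regression attacker, and the randomly generated data are illustrative placeholders, not the paper's exact attack pipeline.

```python
# Minimal AIA sketch (illustrative, not the paper's exact attack):
# the adversary trains a classifier on shadow (embedding, attribute)
# pairs, then predicts the sensitive attribute of target nodes from
# their leaked embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder stand-ins for embeddings exposed by a GPL service:
# 1000 shadow nodes and 200 target nodes, each with a 64-dim embedding.
shadow_emb = rng.normal(size=(1000, 64))
shadow_attr = rng.integers(0, 2, size=1000)   # binary sensitive attribute
target_emb = rng.normal(size=(200, 64))
target_attr = rng.integers(0, 2, size=200)    # ground truth, used only to score the attack

# Attacker model: a simple supervised classifier over embeddings.
attacker = LogisticRegression(max_iter=1000).fit(shadow_emb, shadow_attr)

# Inference success rate on the target nodes.
pred = attacker.predict(target_emb)
print(f"AIA success rate: {accuracy_score(target_attr, pred):.2%}")
```

On the random placeholder data the success rate sits near chance; the point of the sketch is the pipeline, since on real embeddings that encode sensitive attributes the same attack is what reaches the high success rates reported above.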
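As a complement, the following is a minimal sketch of a Laplacian-noise defense of the kind evaluated here, assuming noise is added directly to node embeddings before they are exposed; the noise scale b = 0.5 is an arbitrary illustrative choice, not a calibrated value from the paper.

```python
# Minimal Laplacian-noise perturbation sketch (illustrative assumption:
# the defense perturbs embeddings before release). Larger scales suppress
# the attacker's signal but also degrade downstream task utility.
import numpy as np

def laplace_perturb(embeddings: np.ndarray, scale: float, seed: int = 0) -> np.ndarray:
    """Return embeddings with i.i.d. Laplace(0, scale) noise added elementwise."""
    rng = np.random.default_rng(seed)
    return embeddings + rng.laplace(loc=0.0, scale=scale, size=embeddings.shape)

# Example: perturb 64-dim embeddings for 200 nodes with scale b = 0.5.
emb = np.random.default_rng(1).normal(size=(200, 64))
noisy_emb = laplace_perturb(emb, scale=0.5)
print(noisy_emb.shape)  # (200, 64)
```

Sweeping the scale parameter and re-running the inference attack on the perturbed embeddings traces out the privacy-utility trade-off noted above: inference success falls as the scale grows, but so does model performance.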