Graph representation learning (GRL) is critical for extracting insights from complex network structures, but it also raises security concerns due to potential privacy vulnerabilities in these representations. This paper investigates structural vulnerabilities in graph neural models through which sensitive topological information can be inferred via edge reconstruction attacks. Our research primarily addresses the theoretical underpinnings of similarity-based edge reconstruction attacks (SERA), furnishing a non-asymptotic analysis of their reconstruction capacities. Moreover, we present empirical corroboration indicating that such attacks can perfectly reconstruct sparse graphs as the graph size increases. Conversely, we establish that sparsity is critical to SERA's effectiveness, as demonstrated through analysis and experiments on (dense) stochastic block models. Finally, we explore the resilience of private graph representations produced via the noisy aggregation (NAG) mechanism against SERA. Through theoretical analysis and empirical assessments, we confirm that NAG mitigates SERA. In parallel, we empirically delineate settings in which SERA is effective, and settings in which it falls short, as an instrument for elucidating the trade-off between privacy and utility.
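To make the two objects of study concrete, the sketch below illustrates (i) a similarity-based edge reconstruction attack that scores node pairs by the cosine similarity of their released representations and predicts the top-scoring pairs as edges, and (ii) a toy noisy-aggregation step that perturbs aggregated node features with Gaussian noise. This is a minimal sketch under stated assumptions: the function names, the cosine-similarity scoring rule, the top-k decision rule, and the clipping and noise details are illustrative choices, not the paper's exact formulations of SERA or NAG.

```python
import numpy as np


def sera_reconstruct(embeddings: np.ndarray, num_edges: int) -> np.ndarray:
    """Similarity-based edge reconstruction (illustrative): score every node
    pair by the cosine similarity of its observed representations and predict
    the highest-scoring pairs as edges of the hidden graph."""
    # Normalize rows so the inner product equals cosine similarity.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    z = embeddings / np.clip(norms, 1e-12, None)
    sim = z @ z.T                          # pairwise similarity matrix
    n = sim.shape[0]
    iu = np.triu_indices(n, k=1)           # each unordered pair once
    scores = sim[iu]
    keep = np.argsort(scores)[-num_edges:]  # top-k pairs as predicted edges
    adj_hat = np.zeros((n, n), dtype=int)
    adj_hat[iu[0][keep], iu[1][keep]] = 1
    return adj_hat + adj_hat.T             # symmetric predicted adjacency


def noisy_aggregate(features: np.ndarray, adj: np.ndarray,
                    sigma: float, seed: int = 0) -> np.ndarray:
    """Toy noisy-aggregation step (illustrative): sum norm-clipped neighbor
    features and add Gaussian noise, so released representations reveal less
    about which edges are present."""
    rng = np.random.default_rng(seed)
    # Clip each row to at most unit norm so the noise scale matches a fixed
    # per-edge sensitivity (an assumption of this sketch).
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    x = features / np.clip(norms, 1.0, None)
    agg = adj @ x
    return agg + rng.normal(scale=sigma, size=agg.shape)
```

Under this reading, the attack's success hinges on adjacent nodes having markedly more similar representations than non-adjacent ones (plausible for sparse graphs), while larger noise scales in the aggregation blur exactly that signal, which is the privacy-utility tension the abstract refers to.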