Graph generative diffusion models have recently emerged as a powerful paradigm for generating complex graph structures, effectively capturing intricate dependencies and relationships within graph data. However, the privacy risks associated with these models remain largely unexplored. In this paper, we investigate information leakage in such models through three types of black-box inference attacks. First, we design a graph reconstruction attack that reconstructs, from the generated graphs, graphs structurally similar to the training graphs. Second, we propose a property inference attack that infers properties of the training graphs, such as the average graph density and the distribution of densities, from the generated graphs. Third, we develop two membership inference attacks that determine whether a given graph is present in the training set. Extensive experiments on three types of graph generative diffusion models and six real-world graphs demonstrate the effectiveness of these attacks, which significantly outperform the baseline approaches. Finally, we propose two defense mechanisms that mitigate these inference attacks and achieve a better trade-off between defense strength and target model utility than existing methods. Our code is available at https://zenodo.org/records/17946102.
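To make the property inference setting concrete, the following is a minimal sketch, not the paper's implementation: an attacker with only black-box sampling access queries the target model for generated graphs and uses their empirical density statistics as an estimate of the training graphs' average density and density distribution. The `sample_graph` callable is a hypothetical stand-in for the model's sampling interface.

```python
import networkx as nx
import numpy as np

def infer_density_statistics(sample_graph, num_samples=1000):
    """Estimate training-graph density statistics from a black-box
    graph generative model.

    `sample_graph` is a hypothetical callable that queries the target
    model once and returns one generated graph as a networkx.Graph.
    """
    densities = []
    for _ in range(num_samples):
        g = sample_graph()                 # one black-box query
        densities.append(nx.density(g))    # |E| / (|V| choose 2)
    densities = np.asarray(densities)
    # The empirical mean and distribution over generated graphs serve
    # as the attacker's estimates for the training graphs' average
    # density and density distribution, respectively.
    return densities.mean(), densities
```

This sketch illustrates the attack surface only: because the generative model is trained to match the training distribution, aggregate statistics of its samples can leak aggregate properties of the private training set, even without access to model parameters.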