Federated graph learning (FGL) has recently emerged as a promising privacy-preserving paradigm that enables distributed graph learning across multiple data owners. A critical privacy concern in federated learning is whether an adversary can recover raw training data from shared gradients, a vulnerability known as deep leakage from gradients (DLG). However, most prior studies of the DLG problem have focused on image or text data, and it remains an open question whether graphs can be effectively recovered, particularly because the graph structure and node features are uniquely entangled in graph neural networks (GNNs). In this work, we first theoretically analyze the components involved in FGL and derive a crucial insight: once the graph structure is recovered, the node features can be obtained through a closed-form recursive rule. Building on this analysis, we propose GraphDLG, a novel approach for recovering raw training graphs from the gradients shared in FGL, which can exploit randomly generated graphs or client-side training graphs as auxiliary information to enhance recovery. Extensive experiments demonstrate that GraphDLG outperforms existing solutions by successfully decoupling the graph structure from the node features, improving node feature reconstruction by over 5.46% (in MSE) and graph structure reconstruction by over 25.04% (in AUC).
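For readers unfamiliar with the attack setting, the sketch below illustrates the generic DLG gradient-matching objective on a toy one-layer GCN in PyTorch: the attacker optimizes dummy inputs so that the gradients they induce match the gradients observed from the victim. All names (`TinyGCN`, `dummy_X`, `dummy_A`) and the assumption that labels are known are illustrative; this is the baseline attack idea, not the paper's GraphDLG method, which additionally decouples structure recovery from the closed-form feature recovery described in the abstract.

```python
# Minimal sketch of a DLG-style gradient-matching attack on a toy one-layer GCN.
# Illustrative only: names and simplifications (no adjacency normalization,
# labels assumed known) are assumptions, not the paper's GraphDLG procedure.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, c = 6, 4, 3  # nodes, feature dim, classes

class TinyGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.W = torch.nn.Linear(d, c, bias=False)

    def forward(self, X, A):
        # One propagation step: (A @ X) @ W; normalization omitted for brevity.
        return self.W(A @ X)

model = TinyGCN()

# --- Victim side: gradients computed on the private graph (what is shared) ---
X_true = torch.randn(n, d)
A_true = (torch.rand(n, n) < 0.3).float()
y_true = torch.randint(0, c, (n,))
loss = F.cross_entropy(model(X_true, A_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# --- Attacker side: optimize dummy features and a relaxed (continuous)
# --- adjacency so that their gradients match the observed ones.
dummy_X = torch.randn(n, d, requires_grad=True)
dummy_A = torch.randn(n, n, requires_grad=True)
opt = torch.optim.Adam([dummy_X, dummy_A], lr=0.1)

for step in range(300):
    opt.zero_grad()
    dummy_loss = F.cross_entropy(model(dummy_X, torch.sigmoid(dummy_A)), y_true)
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # Gradient-matching objective: L2 distance between dummy and true gradients.
    match = sum(((dg - tg) ** 2).sum()
                for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    opt.step()
```

The `sigmoid` relaxation of the adjacency matrix is a common trick for making the discrete structure differentiable during the attack; the recovered continuous entries can then be thresholded and scored against the true edges, which is consistent with the AUC metric used for structure reconstruction above.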