Federated graph learning (FedGL) is an emerging learning paradigm for collaboratively training graph models on data from multiple clients. However, during the development and deployment of FedGL models, they are susceptible to illegal copying and model theft. Backdoor-based watermarking is a well-known method for mitigating these attacks, as it offers ownership verification to the model owner. We take the first step toward protecting the ownership of FedGL models via backdoor-based watermarking. Existing techniques face challenges in achieving this goal: 1) they either cannot be directly applied or yield unsatisfactory performance; 2) they are vulnerable to watermark removal attacks; and 3) they lack formal guarantees. To address all these challenges, we propose FedGMark, the first certified robust backdoor-based watermarking scheme for FedGL. FedGMark leverages the unique graph structure and client information in FedGL to learn customized and diverse watermarks. It also introduces a novel GL architecture that facilitates defending against both empirical and theoretically worst-case watermark removal attacks. Extensive experiments validate the promising empirical and provable watermarking performance of FedGMark. Source code is available at: https://github.com/Yuxin104/FedGMark.