Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing information in the graph domain. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations of complex graph data. However, due to privacy concerns and regulatory restrictions, centralized GNNs can be difficult to apply in data-sensitive scenarios. Federated learning (FL) is an emerging technology for privacy-preserving settings in which several parties collaboratively train a shared global model. Although several research works have applied FL to train GNNs (Federated GNNs), there is no prior work on their robustness to backdoor attacks. This paper bridges this gap by conducting two types of backdoor attacks on Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). Our experiments show that the attack success rate of DBA is higher than that of CBA in almost all evaluated cases. For CBA, the attack success rate of all local triggers is similar to that of the global trigger, even though only the global trigger is embedded in the adversarial party's training set. To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Moreover, we explore the robustness of DBA and CBA against one defense. We find that both attacks are robust against the investigated defense, which necessitates treating backdoor attacks on Federated GNNs as a novel threat requiring custom defenses.
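The key difference between the two attacks is how the trigger is distributed among malicious clients: in CBA, a single adversary embeds the entire global trigger, whereas in DBA, the global trigger is decomposed into local triggers, each embedded by a different adversary. The following toy sketch illustrates this decomposition; the edge-set representation of graphs and triggers and all function names are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch (NOT the paper's code): a trigger is modeled
# as a set of extra edges injected into a training graph.

def inject_trigger(graph_edges, trigger_edges):
    """Return a poisoned copy of the graph with the trigger embedded."""
    return set(graph_edges) | set(trigger_edges)

def cba_poison(graph_edges, global_trigger):
    # Centralized backdoor attack: ONE malicious client embeds
    # the entire global trigger into its local training data.
    return [inject_trigger(graph_edges, global_trigger)]

def dba_poison(graph_edges, global_trigger, num_adversaries):
    # Distributed backdoor attack: the global trigger is split into
    # local triggers; each malicious client embeds only its own part.
    trigger = sorted(global_trigger)
    chunk = -(-len(trigger) // num_adversaries)  # ceiling division
    local_triggers = [trigger[i:i + chunk]
                      for i in range(0, len(trigger), chunk)]
    return [inject_trigger(graph_edges, t) for t in local_triggers]

# Example: a 4-edge global trigger split across 2 adversaries.
clean_graph = {(0, 1), (1, 2)}
global_trigger = {(9, 10), (10, 11), (11, 12), (12, 9)}

cba_graphs = cba_poison(clean_graph, global_trigger)
dba_graphs = dba_poison(clean_graph, global_trigger, num_adversaries=2)

# CBA: the single poisoned graph carries the full global trigger.
assert cba_graphs[0] == clean_graph | global_trigger
# DBA: no single graph carries the full trigger, but their union does.
assert all(not global_trigger <= g for g in dba_graphs)
assert set().union(*dba_graphs) == clean_graph | global_trigger
```

The sketch captures the structural point evaluated in the paper: in DBA, each adversary's local data contains only a fragment of the global trigger, yet the aggregated global model can still be activated by the full trigger.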