Differential privacy (DP) has been integrated into graph neural networks (GNNs) to protect sensitive structural information, e.g., edges, nodes, and associated features, across various applications. A prominent approach is to perturb the message-passing process, which forms the core of most GNN architectures. However, existing methods typically incur a privacy cost that grows linearly with the number of layers (e.g., GAP, published at USENIX Security'23), ultimately requiring excessive noise to maintain a reasonable privacy level. This limitation becomes particularly problematic when multi-layer GNNs, which have shown better performance than one-layer GNNs, are used to process graph data with sensitive information. In this paper, we theoretically establish that the privacy budget converges with respect to the number of layers by applying privacy amplification techniques to the message-passing process, exploiting the contractive properties inherent to standard GNN operations. Motivated by this analysis, we propose a simple yet effective Contractive Graph Layer (CGL) that ensures the contractiveness required for our theoretical guarantees while preserving model utility. Our framework, CARIBOU, supports both training and inference, and is equipped with a contractive aggregation module, a privacy allocation module, and a privacy auditing module. Experimental evaluations demonstrate that CARIBOU significantly improves the privacy-utility trade-off and achieves superior performance in privacy auditing tasks.
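To make the contractiveness idea concrete, the following is a minimal sketch (not the paper's actual CGL design) of a message-passing layer that is non-expansive by construction: a symmetrically normalized adjacency matrix has spectral norm at most 1, the weight matrix is clipped to spectral norm at most 1, and ReLU is 1-Lipschitz, so a perturbation of one node's features cannot be amplified by the layer. All function names here (`sym_normalize`, `contractive_layer`) are illustrative assumptions.

```python
import numpy as np

def sym_normalize(A):
    """Symmetrically normalize a (symmetric) adjacency matrix with self-loops.

    D^{-1/2} (A + I) D^{-1/2} is symmetric with spectral norm <= 1.
    """
    A = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def contractive_layer(A_hat, X, W):
    """One aggregation step; W is rescaled so its spectral norm is <= 1."""
    s = np.linalg.norm(W, ord=2)                  # largest singular value
    W = W / max(s, 1.0)                           # clip spectral norm to 1
    return np.maximum(A_hat @ X @ W, 0.0)         # ReLU is 1-Lipschitz

rng = np.random.default_rng(0)
B = rng.random((5, 5)) < 0.4
A = np.maximum(B, B.T).astype(float)              # symmetric adjacency
A_hat = sym_normalize(A)

W = rng.standard_normal((8, 8))
X = rng.standard_normal((5, 8))
X_prime = X.copy()
X_prime[0] += rng.standard_normal(8)              # perturb one node's features

d_in = np.linalg.norm(X - X_prime)
d_out = np.linalg.norm(contractive_layer(A_hat, X, W)
                       - contractive_layer(A_hat, X_prime, W))
assert d_out <= d_in + 1e-9                       # the layer never amplifies the perturbation
```

Because each such layer is non-expansive, the influence of any single node's data shrinks (or at worst stays constant) as it propagates through depth, which is the structural property that privacy-amplification arguments of the kind described above can exploit.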