Wide deployment of machine learning models on edge devices has left model intellectual property (IP) and data privacy vulnerable. We propose GNNVault, the first secure Graph Neural Network (GNN) deployment strategy based on a Trusted Execution Environment (TEE). GNNVault follows a partition-before-training design: a private GNN rectifier complements a public backbone model, so that both the critical GNN model parameters and the private graph used during inference are protected within secure TEE compartments. Real-world implementations with Intel SGX demonstrate that GNNVault safeguards GNN inference against state-of-the-art link stealing attacks with negligible accuracy degradation (<2%).
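To make the partition concrete, the following is a minimal NumPy sketch of the inference split described above, under assumed details not given in the abstract: the public backbone is taken to be a graph-free layer that sees only node features, while the rectifier propagates over the private adjacency matrix (held inside the TEE) and additively corrects the backbone's output. All shapes, weight names, and the additive combination rule are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 5 nodes, 4 features, 3 classes (illustrative only).
X = rng.standard_normal((5, 4))               # public node features
A = (rng.random((5, 5)) < 0.4).astype(float)  # private adjacency: stays inside the TEE
A_hat = A + np.eye(5)                         # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))      # row-normalization

W_pub = rng.standard_normal((4, 3))   # public backbone weights (untrusted world)
W_priv = rng.standard_normal((4, 3))  # private rectifier weights (inside the TEE)

# Untrusted world: the backbone never touches the graph structure.
z_public = X @ W_pub

# Inside the TEE: the rectifier aggregates over the private graph
# and corrects the public prediction.
z_rectifier = D_inv @ A_hat @ X @ W_priv
logits = z_public + z_rectifier

pred = logits.argmax(axis=1)  # final per-node class predictions
```

The point of the split is that an attacker observing only `W_pub` and `z_public` never sees `A` or `W_priv`, which is what blocks link stealing attacks that infer edges from model outputs.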