Training deep neural networks requires substantial resources and is therefore often performed on cloud servers. However, this raises privacy concerns when the training dataset contains sensitive content, e.g., facial or medical images. In this work, we propose a method that splits the training of a deep learning model between an edge device and a cloud server, preventing sensitive content from being transmitted to the cloud while retaining the information needed for the task. The proposed privacy-preserving method uses adversarial early exits to suppress the sensitive content at the edge and transmits only the task-relevant information to the cloud. It also incorporates noise addition during training to provide a differential privacy guarantee. We extensively evaluate our method on facial and medical datasets with diverse attributes, using various deep learning architectures, and demonstrate its strong performance. We further show the effectiveness of the privacy preservation through successful defenses against white-box, deep, and GAN-based reconstruction attacks. The approach is designed for resource-constrained edge devices, keeping memory usage and computational overhead minimal.