Effective machine learning (ML) training requires collaboration between dataset owners and model owners. During this collaboration, however, dataset owners and model owners want to protect the confidentiality of their respective assets (i.e., datasets, models, and training code), and dataset owners additionally care about the privacy of the individual users whose data their datasets contain. Existing solutions either provide limited confidentiality for models and training code, or suffer from privacy issues due to collusion. We present Citadel++, a scalable collaborative ML training system designed to simultaneously protect the confidentiality of datasets, models, and training code, as well as the privacy of individual users. Citadel++ enhances differential privacy techniques to safeguard the privacy of individual user data while maintaining model utility. By employing virtual machine-level Trusted Execution Environments (TEEs) together with improved integrity protection built on various OS-level mechanisms, Citadel++ preserves the confidentiality of datasets, models, and training code, and enforces our privacy mechanisms even when the models and training code are maliciously designed. Our experiments show that Citadel++ delivers privacy, model utility, and performance while adhering to the confidentiality and privacy requirements of dataset owners and model owners, outperforming state-of-the-art privacy-preserving training systems by up to 543x on CPU TEEs and 113x on GPU TEEs.