Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for training machine learning models under differential privacy (DP). The most common DP-SGD privacy accountants rely on Poisson subsampling to ensure the theoretical DP guarantees. Implementing computationally efficient DP-SGD with Poisson subsampling is not trivial, which leads many implementations to take a shortcut by using a computationally faster subsampling scheme. We quantify the computational cost of training deep learning models under DP by implementing and benchmarking efficient methods with the correct Poisson subsampling. We find that the naive implementation of DP-SGD with Opacus in PyTorch yields a throughput 2.6 to 8 times lower than that of SGD. However, efficient gradient clipping implementations like Ghost Clipping can roughly halve this cost. We propose an alternative computationally efficient implementation of DP-SGD in JAX that uses Poisson subsampling and performs comparably to efficient clipping optimizations based on PyTorch. We study the scaling behavior using up to 80 GPUs and find that DP-SGD scales better than SGD. We share our library at https://github.com/DPBayes/Towards-Efficient-Scalable-Training-DP-DL.
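To illustrate why Poisson subsampling complicates an efficient implementation: each training example is included in a batch independently with probability q, so batch sizes vary from step to step, which clashes with the fixed-shape batches that deep learning frameworks prefer. The following is a minimal NumPy sketch of the mechanism only, not the paper's implementation; the function name and signature are illustrative.

```python
import numpy as np

def poisson_subsample(n: int, q: float, rng: np.random.Generator) -> np.ndarray:
    """Return indices of a Poisson-subsampled batch from a dataset of size n.

    Each example is included independently with probability q, so the
    batch size is a Binomial(n, q) random variable with mean n * q,
    rather than a fixed constant as in shuffle-based minibatching.
    """
    mask = rng.random(n) < q
    return np.flatnonzero(mask)

rng = np.random.default_rng(0)
batch = poisson_subsample(n=60_000, q=0.01, rng=rng)
# The expected batch size is 600, but the realized size fluctuates,
# which is why fixed-shape implementations often skip true Poisson sampling.
```

The variable batch size is what makes correct Poisson subsampling awkward in frameworks with static shapes (e.g. under JAX's jit), typically requiring padding or masking to a fixed maximum size.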