The performance of differentially private machine learning can be boosted significantly by leveraging the transfer learning capabilities of non-private models pretrained on large public datasets. We critically review this approach. We primarily question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving. We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy. Beyond the privacy considerations of using public data, we further question the utility of this paradigm. We scrutinize whether existing machine learning benchmarks are appropriate for measuring the ability of pretrained models to generalize to sensitive domains, which may be poorly represented in public Web data. Finally, we notice that pretraining has been especially impactful for the largest available models -- models sufficiently large to prevent end users from running them on their own devices. Thus, deploying such models today could be a net loss for privacy, as it would require (private) data to be outsourced to a more computationally powerful third party. We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.