Pre-training leverages public datasets to build an advanced machine learning model that can then be easily fine-tuned for various downstream tasks, and it has been extensively explored as a way to reduce computation and communication costs. Inspired by these advantages, we are the first to explore how model pre-training can mitigate the detrimental effect of noise in differentially private federated learning (DPFL). DPFL extends federated learning (FL), the de facto standard for privacy preservation when training a model across multiple clients holding private data, by injecting differentially private (DP) noise to obfuscate the model gradients exposed in FL; this noise, however, can considerably impair model accuracy. In this work, we conduct a comprehensive empirical study comparing two pre-training-based strategies, head fine-tuning (HT) and full fine-tuning (FT), against scratch training (ST) in DPFL. Our experiments fine-tune models pre-trained on ImageNet-1K with the CIFAR-10, CHMNIST, and Fashion-MNIST (FMNIST) datasets, respectively. The results demonstrate that HT and FT significantly mitigate the influence of noise by reducing the number of gradient exposures. In particular, HT outperforms FT when the privacy budget is tight or the model size is large. A visualization and explanation study further substantiates our findings. Our pioneering study introduces a new perspective on enhancing DPFL and expands its practical applications.
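To make the DP mechanism concrete, the following is a minimal sketch (not the paper's implementation) of the standard Gaussian-mechanism step that DPFL applies to exposed gradients: each per-sample gradient is clipped to a norm bound and Gaussian noise scaled by a noise multiplier is added to the average. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def dp_aggregate(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Illustrative DP gradient aggregation (Gaussian mechanism).

    Each per-sample gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are averaged, and Gaussian noise with standard deviation
    noise_multiplier * clip_norm / batch_size is added to the average.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_sample_grads)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

Under this mechanism, HT reduces the noise burden simply because only the head's parameters produce gradients that must be clipped and noised, and fine-tuning converges in fewer rounds, so gradients are exposed (and perturbed) fewer times than in scratch training.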