Fine-tuning large pre-trained foundation models (FMs) on distributed edge devices presents considerable computational and privacy challenges. Federated fine-tuning (FedFT) mitigates some privacy issues by facilitating collaborative model training without sharing raw data. To lessen the computational burden on resource-limited devices, combining low-rank adaptation (LoRA) with federated learning enables parameter-efficient fine-tuning. Additionally, the split FedFT architecture partitions an FM between edge devices and a central server, reducing the need to deploy the complete model on individual devices. However, the risk of privacy eavesdropping attacks in FedFT remains a concern, particularly in sensitive domains such as healthcare and finance. In this paper, we propose a split FedFT framework with differential privacy (DP) over wireless networks, where the inherent noise of the wireless uplink channel is exploited to achieve DP guarantees without adding extra artificial noise. We investigate the impact of this wireless noise on the convergence performance of the proposed framework. We also show that by updating only one of the low-rank matrices in split FedFT with DP, the proposed method mitigates the noise amplification effect. Simulation results demonstrate that the proposed framework achieves higher accuracy under strict privacy budgets than baseline methods.
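To make the noise-amplification claim concrete, the following is a minimal sketch of the underlying intuition, assuming the standard LoRA parameterization $W = W_0 + BA$ with $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, and additive channel noise matrices $N_B$, $N_A$ perturbing the uplink transmissions of the two low-rank factors (the symbols $N_A$, $N_B$, and $\widehat{W}$ are illustrative notation, not taken from the paper):

```latex
% Sketch: why updating both LoRA factors over a noisy uplink amplifies
% noise, under the assumed model W = W_0 + BA with additive channel
% noise N_B, N_A on the transmitted factors.
\begin{align}
  \widehat{W}
    &= W_0 + (B + N_B)(A + N_A) \nonumber \\
    &= W_0 + BA + B N_A + N_B A
       + \underbrace{N_B N_A}_{\text{multiplicative noise term}} ,
\end{align}
% whereas freezing A and transmitting only the update of B yields
\begin{equation}
  \widehat{W} = W_0 + (B + N_B)A = W_0 + BA + N_B A ,
\end{equation}
% which is linear in the channel noise and free of the N_B N_A term.
```

Under this reading, updating only one low-rank matrix keeps the perturbation of the effective weight update linear in the channel noise, which is consistent with the mitigation of the noise amplification effect stated above.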