AI foundation models have recently demonstrated impressive capabilities across a wide range of tasks. Fine-tuning (FT) is a method of customizing a pre-trained AI foundation model by further training it on a smaller, targeted dataset. In this paper, we initiate the study of the Privacy-Preserving Parameter-Efficient FT (P3EFT) framework, which can be viewed as the intersection of Parameter-Efficient FT (PEFT) and Privacy-Preserving FT (PPFT). PEFT modifies only a small subset of the model's parameters to achieve FT (i.e., adapting a pre-trained model to a specific dataset), while PPFT uses privacy-preserving technologies to protect the confidentiality of the model during the FT process. There have been many studies on PEFT or PPFT, but very few on their fusion, which motivates our work on P3EFT to achieve both parameter efficiency and model privacy. To exemplify P3EFT, we present the PrivTuner scheme, which incorporates privacy protection enabled by Fully Homomorphic Encryption (FHE) into LoRA (short for ``Low-Rank Adaptation''). Intuitively speaking, PrivTuner allows the model owner and the external data owners to collaboratively implement PEFT with encrypted data. After describing PrivTuner in detail, we further investigate its energy consumption and privacy protection. We then consider a PrivTuner system over wireless communications and formulate a joint optimization problem to adaptively minimize energy while maximizing privacy protection, with the optimization variables including FDMA bandwidth allocation, wireless transmission power, computational resource allocation, and the privacy protection level. A resource allocation algorithm is devised to solve the problem. Experiments demonstrate that our algorithm significantly reduces energy consumption while adapting to different privacy requirements.
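To make the LoRA component of PrivTuner concrete, the sketch below shows a minimal low-rank update in plain Python/NumPy: the pre-trained weight stays frozen and only two small factors are trained, which is what yields the parameter efficiency. The dimensions, rank, and scaling factor are illustrative assumptions rather than the paper's exact configuration, and the FHE encryption of the data owner's inputs (e.g., under a CKKS-style scheme) is deliberately omitted here.

```python
import numpy as np

# Minimal LoRA-style update sketch (illustrative; shapes, rank r, and alpha are assumptions).
# The frozen pre-trained weight W is never updated; only the low-rank factors A and B
# (r * (d + k) parameters instead of d * k) would receive gradients during fine-tuning.
d, k, r, alpha = 768, 768, 8, 16          # hypothetical layer dimensions, LoRA rank, and scale
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))           # frozen pre-trained weight
A = rng.standard_normal((r, k)) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                      # trainable low-rank factor, zero-initialized so the
                                          # effective weight starts exactly at W

def forward(x):
    """Apply the effective weight W + (alpha / r) * B @ A to a batch of inputs x."""
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((2, k))           # a toy batch of two inputs
print(forward(x).shape)                   # (2, d)
```

In the PrivTuner setting described above, the inputs `x` would arrive FHE-encrypted from the data owners, so the model owner applies such low-rank updates without ever seeing the plaintext data; that encrypted arithmetic is the part this sketch leaves out.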