Large language models have consistently demonstrated remarkable performance across a wide spectrum of applications. Nonetheless, deploying these models can inadvertently expose user privacy to potential risks. Moreover, their substantial memory demands during training impose a considerable burden on computational resources, which is a significant concern in practice. In this paper, we present MemDPT, an innovative training framework that not only reduces the memory cost of fine-tuning large language models but also places a strong emphasis on safeguarding user data privacy. MemDPT provides edge network and reverse network designs to accommodate various differentially private, memory-efficient fine-tuning schemes. Our approach achieves $2 \sim 3 \times$ memory optimization while providing robust privacy protection, ensuring that user data remains secure and confidential. Extensive experiments demonstrate that MemDPT effectively supports memory-efficient differentially private fine-tuning across various task scenarios.