With the advancement of large language models (LLMs), significant progress has been achieved in various Natural Language Processing (NLP) tasks. However, existing LLMs still face two major challenges that hinder their broader adoption: (1) their responses tend to be generic and lack personalization tailored to individual users, and (2) they rely heavily on cloud infrastructure due to intensive computational requirements, leading to dependence on stable network connectivity and noticeable response latency. Recent research has predominantly focused on either developing cloud-based personalized LLMs or exploring the on-device deployment of general-purpose LLMs; few studies have addressed both limitations simultaneously by investigating personalized on-device language models. To bridge this gap, we propose CDCDA-PLM, a framework for deploying personalized on-device language models on user devices with support from a powerful cloud-based LLM. Specifically, CDCDA-PLM leverages the server-side LLM's strong generalization capabilities to augment each user's limited personal data, mitigating the issue of data scarcity. Using both the real and synthetic data, a personalized on-device language model (LM) is fine-tuned via parameter-efficient fine-tuning (PEFT) modules and deployed on the user's local device, enabling queries to be served locally without depending on cloud-based LLMs. This approach eliminates reliance on network stability and ensures low response latency. Experimental results across six tasks in a widely used personalization benchmark demonstrate the effectiveness of CDCDA-PLM.
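To make the fine-tuning step concrete, the following is a minimal sketch, not the paper's actual code, of how a small on-device LM could be adapted with a LoRA-style PEFT module on a mix of a user's real records and cloud-LLM-generated synthetic records. The base model name, hyperparameters, and example data are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): fine-tune a small on-device LM
# with a LoRA PEFT adapter on real + synthetic personal data.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "Qwen/Qwen2.5-0.5B"  # assumed small base model suitable for devices
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:  # causal-LM tokenizers often lack a pad token
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach a lightweight LoRA module; only its parameters are updated.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Real user records plus synthetic records produced by the server-side LLM
# (both hypothetical placeholders here).
real = ["User review: the battery easily lasts two days of heavy use."]
synthetic = ["User review: great camera, though low-light shots are noisy."]
ds = Dataset.from_dict({"text": real + synthetic}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="plm_adapter", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

# Only the small adapter needs to be shipped to and stored on the device.
model.save_pretrained("plm_adapter")
```

Under these assumptions, the device would keep the frozen base model and load the saved adapter next to it, e.g. via `PeftModel.from_pretrained(base_model, "plm_adapter")`, so queries are served locally without contacting the cloud LLM.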