Effectively and efficiently adapting a pre-trained language model (PLM) for human-centered text understanding (HCTU) is challenging, since user tokens number in the millions in most personalized applications and lack concrete explicit semantics. A standard parameter-efficient approach (e.g., LoRA) requires maintaining a separate suite of adapters for each user. In this work, we introduce a personalized LoRA (PLoRA) with a plug-and-play (PnP) framework for the HCTU task. PLoRA is effective, parameter-efficient, and can be dynamically deployed in PLMs. Moreover, personalized dropout and mutual information maximization strategies are adopted, so the proposed PLoRA adapts well to few/zero-shot learning scenarios and mitigates the cold-start issue. Experiments conducted on four benchmark datasets show that the proposed method outperforms existing methods in full/few/zero-shot learning scenarios for the HCTU task, despite having fewer trainable parameters. For reproducibility, the code for this paper is available at: https://github.com/yoyo-yun/PLoRA.
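To make the motivation concrete, the sketch below illustrates the general idea of conditioning a shared LoRA update on a user embedding, so that all users share one pair of low-rank matrices instead of each storing a separate adapter suite. This is a minimal NumPy illustration under our own assumptions (the class name, the `P` projection, and the `tanh` gate are all hypothetical), not the paper's actual PLoRA formulation.

```python
import numpy as np

class PLoRALayerSketch:
    """Hypothetical sketch of a personalized LoRA layer (names illustrative).

    Standard LoRA computes h = W x + (alpha / r) * B (A x).
    Assumed personalized variant: the low-rank code A x is modulated by a
    user embedding u, so distinct users share A and B but receive
    user-specific adaptations -- avoiding one adapter set per user.
    """

    def __init__(self, d_in, d_out, r, d_user, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in)) * 0.02  # frozen pre-trained weight
        self.A = rng.normal(size=(r, d_in)) * 0.02      # shared low-rank down-projection
        self.B = np.zeros((d_out, r))                   # shared up-projection, zero-init as in LoRA
        self.P = rng.normal(size=(r, d_user)) * 0.02    # maps user embedding into rank space (assumed)
        self.scale = alpha / r

    def forward(self, x, u):
        gate = np.tanh(self.P @ u)      # user-conditioned gate over the rank dimension
        z = (self.A @ x) * gate         # personalized low-rank code
        return self.W @ x + self.scale * (self.B @ z)
```

Because `B` is zero-initialized, the layer initially reproduces the frozen PLM output for every user, and personalization grows only as `B` (and `P`) are trained; this mirrors standard LoRA initialization and is one plausible way to keep the added parameters plug-and-play.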