Parameter-efficient fine-tuning (PEFT) has emerged as an effective method for adapting pre-trained language models to a wide range of downstream tasks. Recently, there has been growing interest in transferring knowledge from one or more source tasks to a downstream target task to improve performance. However, current approaches typically either train adapters on individual tasks or distill shared knowledge from source tasks, failing to fully exploit task-specific knowledge and the correlation between source and target tasks. To overcome these limitations, we propose PEMT, a novel parameter-efficient fine-tuning framework based on multi-task transfer learning. PEMT extends the mixture-of-experts (MoE) framework to capture transferable knowledge as a weighted combination of adapters trained on source tasks. These weights are determined by a gated unit that measures the correlation between the target task and each source task using task description prompt vectors. To fully exploit task-specific knowledge, we further propose a Task Sparsity Loss that encourages sparsity in the gated unit. We conduct experiments on a broad range of tasks spanning 17 datasets. The experimental results demonstrate that PEMT yields stable improvements over full fine-tuning as well as state-of-the-art PEFT and knowledge transfer methods on various tasks. These results highlight the effectiveness of our method, which is capable of fully exploiting the knowledge and correlation features across multiple tasks.
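To make the described mechanism concrete, the following is a minimal, illustrative PyTorch sketch of the core idea: a gated mixture of source-task adapters whose weights come from the similarity between a target-task prompt vector and per-source-task prompt vectors, plus an entropy-style penalty standing in for the Task Sparsity Loss. All class and function names, the adapter architecture, and the exact form of the sparsity penalty are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAdapterMixture(nn.Module):
    """Hypothetical sketch of a PEMT-style layer: a weighted combination of
    source-task adapters, gated by task description prompt vectors."""

    def __init__(self, hidden_dim: int, num_source_tasks: int, prompt_dim: int):
        super().__init__()
        # One bottleneck adapter per source task (assumed pre-trained and
        # frozen in the actual framework; trainable here for simplicity).
        self.adapters = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim // 4),
                nn.ReLU(),
                nn.Linear(hidden_dim // 4, hidden_dim),
            )
            for _ in range(num_source_tasks)
        )
        # Task description prompt vectors: one per source task, plus one
        # learnable prompt for the target task.
        self.source_prompts = nn.Parameter(torch.randn(num_source_tasks, prompt_dim))
        self.target_prompt = nn.Parameter(torch.randn(prompt_dim))

    def gate(self) -> torch.Tensor:
        # Correlation between the target prompt and each source prompt,
        # normalized into mixture weights over the source-task adapters.
        scores = self.source_prompts @ self.target_prompt
        return F.softmax(scores, dim=-1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        w = self.gate()  # shape: (num_source_tasks,)
        mixed = sum(w[i] * self.adapters[i](h) for i in range(len(self.adapters)))
        return h + mixed  # residual connection, as in standard adapters


def task_sparsity_loss(weights: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    # Entropy penalty that pushes the gate toward a sparse (peaked)
    # distribution; an illustrative proxy, since the abstract does not
    # give the exact form of the Task Sparsity Loss.
    return -(weights * (weights + eps).log()).sum()
```

In training, this penalty would be added to the task loss so the gated unit concentrates on the most correlated source tasks rather than spreading weight uniformly.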