Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling them to follow human instructions and enhancing their capabilities on downstream tasks. Substantially increasing the amount of instruction data is a direct way to align a model with a broader range of downstream tasks or to notably improve its performance on a specific task. However, we find that large-scale increases in instruction data can damage the world knowledge previously stored in LLMs. To address this challenge, we propose LoRAMoE, a novel framework that introduces several low-rank adapters (LoRA) and integrates them with a router network, like a plugin version of Mixture of Experts (MoE). It freezes the backbone model and forces a portion of the LoRAs to focus on leveraging world knowledge to solve downstream tasks, thereby alleviating the forgetting of world knowledge. Experimental results show that, as instruction data increases, LoRAMoE significantly improves the ability to handle downstream tasks while preserving the world knowledge stored in the LLM.
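The architecture described above can be illustrated with a minimal sketch: a frozen base weight matrix is augmented by several trainable low-rank adapter "experts", whose outputs are mixed by a softmax router. All shapes, names, and initializations here are assumptions for illustration, not the paper's actual implementation (which operates inside a transformer and uses a specialized training objective to partition the experts).

```python
import numpy as np

class LoRAMoELayer:
    """Hypothetical sketch of a LoRAMoE-style layer.

    A frozen base weight W is augmented by `num_experts` low-rank
    adapters (A_e, B_e); a softmax router mixes their outputs per input.
    Only A, B, and the router would be trained; W stays frozen.
    """

    def __init__(self, d_in, d_out, num_experts=4, rank=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_in, d_out))        # frozen backbone weight
        # LoRA convention: down-projection A random, up-projection B zero,
        # so each expert's update starts at exactly zero.
        self.A = rng.normal(size=(num_experts, d_in, rank)) * 0.01
        self.B = np.zeros((num_experts, rank, d_out))
        self.router = rng.normal(size=(d_in, num_experts)) * 0.01

    def forward(self, x):
        # x: (batch, d_in)
        base = x @ self.W                              # frozen path
        logits = x @ self.router                       # router scores per expert
        gate = np.exp(logits - logits.max(axis=-1, keepdims=True))
        gate = gate / gate.sum(axis=-1, keepdims=True) # softmax over experts
        # Each expert computes x @ A_e @ B_e; results are gate-weighted.
        expert_out = np.einsum('bi,eir,ero->beo', x, self.A, self.B)
        return base + np.einsum('be,beo->bo', gate, expert_out)
```

Because B is initialized to zero, the layer's output equals the frozen path at initialization, so plugging the adapters in does not perturb the pretrained model before training begins.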