Existing methods for adapting large language models (LLMs) to new tasks are not suited to multi-task adaptation because they modify all of the model's weights -- causing destructive interference between tasks. The resulting effects, such as catastrophic forgetting of earlier tasks, make it challenging to obtain good performance on multiple tasks simultaneously. To mitigate this, we propose Lottery Ticket Adaptation (LoTA), a sparse adaptation method that identifies and optimizes only a sparse subnetwork of the model. We evaluate LoTA on a wide range of challenging tasks, including instruction following, reasoning, math, and summarization. LoTA obtains better performance than full fine-tuning and low-rank adaptation (LoRA), and it maintains good performance even after subsequent training on other tasks -- thus avoiding catastrophic forgetting. By extracting and fine-tuning over \emph{lottery tickets} (or \emph{sparse task vectors}), LoTA also enables model merging over highly dissimilar tasks.
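To make the sparse-subnetwork idea concrete, the sketch below shows one plausible realization in PyTorch: a task vector (the difference between calibrated and pretrained weights) is thresholded by magnitude to obtain a sparsity mask, and subsequent gradient updates are restricted to the masked coordinates. The function names, the sparsity level, and the mask-calibration step are illustrative assumptions made for this sketch, not the paper's exact algorithm.

\begin{verbatim}
# Minimal sketch of sparse (masked) fine-tuning in PyTorch.
# The names `extract_masks`, `masked_sgd_step`, and the default
# `sparsity` are hypothetical; the abstract only specifies that a
# sparse subnetwork is identified and optimized.
import torch

def extract_masks(pretrained, calibrated, sparsity=0.99):
    """Keep the largest-magnitude entries of the task vector
    (calibrated - pretrained) as the trainable subnetwork."""
    masks = {}
    for name, w0 in pretrained.items():
        delta = (calibrated[name] - w0).abs()
        k = max(1, int((1.0 - sparsity) * delta.numel()))
        threshold = torch.topk(delta.flatten(), k).values.min()
        masks[name] = (delta >= threshold).float()
    return masks

def masked_sgd_step(model, masks, lr=1e-4):
    """Apply a gradient step only on the masked (ticket)
    coordinates, leaving all other weights frozen."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None and name in masks:
                p -= lr * p.grad * masks[name]
\end{verbatim}

Restricting updates to a fixed sparse mask keeps the resulting task vector sparse by construction, which is what would allow task vectors from dissimilar tasks to be merged with little overlap between their nonzero coordinates.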