We address the problem of extending a pretrained large language model to a new domain that was not seen at training time, such as adding a language for which the original model has seen little or no training data. Popular solutions such as fine-tuning or low-rank adaptation are successful at domain adaptation, but formally they add no extra capacity, and they degrade performance in the original domain. Our paper analyzes this extension problem from three angles: data, architecture, and training procedure, which are best considered jointly. In particular, we improve adapters and make it possible to learn an entire new language while ensuring that the output of the neural network remains almost unchanged in the original domain. To this end, we modify the new residual blocks so that each of them outputs near-zeros in the original domain. This solution of neutral residues, which borrows architectural components from mixture of experts, is effective: with only 20% extra learnable weights compared to an original model trained on English, we obtain results that are significantly better than competing approaches (fine-tuning, low-rank or vanilla adapters) in terms of the trade-off between learning a new language and not forgetting English.
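The core mechanism can be sketched as a residual adapter whose contribution is scaled by a learned gate, so that the block adds (near-)zero to the residual stream on original-domain inputs. The sketch below is illustrative only, with hypothetical shapes and a hand-set gate bias standing in for a trained gate; it is not the paper's exact architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_adapter_block(x, W_down, W_up, w_gate, b_gate):
    """Illustrative 'neutral residue': a bottleneck adapter whose output
    is scaled by a learned scalar gate per token. If training drives the
    gate toward 0 on original-domain inputs, the block is almost the
    identity there and the pretrained model's output is preserved."""
    h = np.maximum(0.0, x @ W_down)       # down-projection + ReLU
    residual = h @ W_up                   # back up to model width
    gate = sigmoid(x @ w_gate + b_gate)   # scalar gate per token
    return x + gate[..., None] * residual

rng = np.random.default_rng(0)
d, r = 8, 2                               # model width, adapter rank (toy sizes)
x = rng.normal(size=(4, d))               # a batch of 4 token vectors
W_down = rng.normal(scale=0.1, size=(d, r))
W_up = rng.normal(scale=0.1, size=(r, d))
w_gate = rng.normal(scale=0.1, size=d)

# A strongly negative gate bias mimics a gate trained to be ~0 on
# original-domain data: the block then leaves its input nearly unchanged.
out_neutral = gated_adapter_block(x, W_down, W_up, w_gate, b_gate=-20.0)
print(np.max(np.abs(out_neutral - x)))    # tiny residual
```

With a gate that opens on new-domain inputs (e.g. `b_gate` near zero or positive after training), the same block contributes a full adapter residual, which is how extra capacity is spent on the new language without disturbing the original one.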