Continual learning remains challenging across various natural language understanding tasks. When models are updated with new training data, they risk catastrophic forgetting of prior knowledge. In the present work, we introduce a discrete key-value bottleneck for encoder-only language models, enabling efficient continual learning through localized updates only. Inspired by the success of the discrete key-value bottleneck in vision, we address new, NLP-specific challenges. We experiment with different bottleneck architectures to identify the variants best suited to language, and present a generic, task-independent discrete key initialization technique for NLP. We evaluate the discrete key-value bottleneck in four continual learning NLP scenarios and demonstrate that it alleviates catastrophic forgetting. We show that it offers performance competitive with other popular continual learning methods at lower computational cost.
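As a rough illustration of the mechanism referred to above, the sketch below shows one way a discrete key-value bottleneck can sit on top of a frozen encoder: pooled encoder features are quantized to the nearest frozen key, and only the learnable value attached to that key is returned, so gradient updates stay localized. The module name, dimensionality choices, and random key initialization are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a discrete key-value bottleneck (assumes PyTorch).
import torch
import torch.nn as nn


class DiscreteKeyValueBottleneck(nn.Module):
    """Maps encoder features to the nearest frozen key and returns its
    learnable value, so training only touches the values selected for
    the current inputs (localized updates)."""

    def __init__(self, num_pairs: int, key_dim: int, value_dim: int):
        super().__init__()
        # Keys are fixed after initialization; only values receive gradients.
        self.keys = nn.Parameter(torch.randn(num_pairs, key_dim),
                                 requires_grad=False)
        self.values = nn.Parameter(torch.zeros(num_pairs, value_dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, key_dim) pooled representations from a frozen encoder
        dists = torch.cdist(features, self.keys)   # (batch, num_pairs)
        idx = dists.argmin(dim=-1)                  # nearest key per input
        return self.values[idx]                     # (batch, value_dim)
```

In this reading, the returned values would feed a lightweight task head, while the encoder and keys stay frozen across tasks.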