As world knowledge advances and new task schemas emerge, Continual Learning (CL) becomes essential for keeping Large Language Models (LLMs) current and addressing their shortcomings. This process typically involves continual instruction tuning (CIT) and continual pre-training (CPT), which enable models to adapt to novel tasks and acquire critical knowledge. However, collecting sufficient CPT data and efficiently bridging knowledge gaps remain significant challenges. Inspired by the 'summarizing mistakes' strategy, we propose Continue Evolving from Mistakes (CEM), a data-efficient method that collects CPT data and continually improves LLMs' performance through iterative evaluation and supplementation with mistake-relevant knowledge. To further optimize data usage and mitigate forgetting, we introduce a novel training paradigm that combines CIT and CPT. Experiments show that CEM substantially enhances the performance of multiple models on both in-domain and out-of-domain QA tasks, achieving gains of up to 29.63%. Code and datasets are available at https://anonymous.4open.science/r/cem-BB25.