Large language models (LLMs) have gained extended context windows through scaling positional encodings and lightweight continual pre-training. However, this extension often degrades performance on short-text tasks, and the causes of this degradation remain insufficiently explored. In this work, we identify two primary factors behind the issue: distribution drift in hidden states and attention scores, and catastrophic forgetting during continual pre-training. To address these challenges, we propose Long Context Pre-training with Restoration Distillation (LongReD), a novel approach that mitigates short-text performance degradation by minimizing the distribution discrepancy between the extended and original models. In addition to training on long texts, LongReD distills the hidden states of selected layers from the original model on short texts. LongReD further introduces a short-to-long distillation that aligns the output distribution on short texts with that on long texts by leveraging skipped positional indices. Experiments on common text benchmarks demonstrate that LongReD effectively preserves the model's short-text performance while maintaining comparable or even better long-text capability than baselines. Our code is available at https://github.com/RUCAIBox/LongReD.
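The two mechanisms named above can be sketched minimally. The snippet below is an illustration only, not the paper's implementation: the MSE form of the hidden-state distillation loss, the per-layer averaging, and the uniform-stride scheme for skipped positional indices are all assumptions made for clarity (the abstract states only that selected layers' hidden states are distilled on short texts and that skipped positional indices are used for short-to-long alignment).

```python
def hidden_state_distill_loss(student_hiddens, teacher_hiddens, layers):
    """Distillation loss on short texts between the extended (student) and
    original (teacher) model. MSE over selected layers is an assumed loss
    form; hiddens are given here as {layer_index: [activation, ...]}."""
    total, count = 0.0, 0
    for layer in layers:
        for s, t in zip(student_hiddens[layer], teacher_hiddens[layer]):
            total += (s - t) ** 2  # squared error per hidden unit
            count += 1
    return total / count


def skipped_position_ids(seq_len, stride):
    """Skipped positional indices for short-to-long distillation: spread a
    short text's seq_len tokens over sparse positions (0, stride, 2*stride,
    ...) so its outputs can be aligned with long-context behaviour. The
    uniform stride is an assumption for illustration."""
    return [i * stride for i in range(seq_len)]
```

For example, `skipped_position_ids(4, 8)` maps a 4-token short text onto positions `[0, 8, 16, 24]`, letting the extended model see position magnitudes typical of long inputs while processing short text.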