Long-context modeling has drawn increasing attention in the field of Large Language Models (LLMs). Continual training on long-context data has become the de facto method for equipping LLMs with the ability to process long inputs. However, measuring the quality of long-context training data remains an open challenge. To address this issue, we propose a Long-context data selection framework with Attention-based Dependency Measurement (LADM), which can efficiently identify high-quality long-context data from a large-scale, multi-domain pre-training corpus. LADM leverages the retrieval capabilities of the attention mechanism to capture contextual dependencies, ensuring a comprehensive quality measurement of long-context data. Experimental results show that our LADM framework significantly boosts the performance of LLMs on multiple long-context tasks with only 1B tokens for continual training.
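The abstract does not specify how attention is turned into a dependency score, so the following is only a minimal, hypothetical sketch of the general idea: given a causal attention matrix for a document, measure how much attention mass queries place on tokens beyond a short local window, treating a higher long-range mass as a proxy for stronger contextual dependency. The function name, the `local_window` parameter, and the toy matrix are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def long_range_dependency_score(attn: np.ndarray, local_window: int = 2) -> float:
    """Average, over query positions, of the attention mass placed on keys
    more than `local_window` positions back. `attn` is assumed to be a
    causal (lower-triangular, row-stochastic) attention matrix.
    NOTE: hypothetical proxy metric, not the paper's exact formulation."""
    n = attn.shape[0]
    scores = []
    for q in range(local_window + 1, n):
        # Attention mass this query spends on distant (non-local) keys.
        distant = attn[q, : q - local_window].sum()
        scores.append(distant)
    return float(np.mean(scores)) if scores else 0.0

# Toy causal attention matrix for 5 tokens (rows normalized to sum to 1).
rng = np.random.default_rng(0)
raw = np.tril(rng.random((5, 5)))
attn = raw / raw.sum(axis=1, keepdims=True)
print(long_range_dependency_score(attn, local_window=2))
```

Under this sketch, a document whose later tokens attend heavily to far-away context scores high and would be kept, while a document whose attention is almost entirely local scores near zero and would be filtered out.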