Despite the impressive capabilities of large language models across various tasks, their continued scaling is hampered not only by data scarcity but also by the performance degradation caused by excessive data repetition during training. To overcome this bottleneck, we propose the Massive Genre-Audience (MGA) reformulation method, a lightweight and scalable data augmentation technique inspired by synthetic data methodologies. MGA systematically reformulates existing corpora into diverse, contextually rich variations that mitigate the negative effects of repetition; alongside the method, we introduce MGACorpus, the resulting 770-billion-token dataset. We experimentally validate the core benefit of MGA by demonstrating superior performance over data repetition and upsampling baselines in scaling scenarios (up to 13B parameters). Furthermore, comprehensive analysis investigates the role of prompt engineering in generation quality and reveals nuances in evaluating model capabilities with standard loss metrics. Our work shows that MGA provides a reliable pathway to substantially augment training datasets, effectively alleviating repetition bottlenecks and enabling more efficient scaling of large language models.
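To make the idea of genre-audience reformulation concrete, the sketch below shows one plausible way to expand a corpus by rewriting each document for several sampled (genre, audience) pairs. It is a minimal illustration, not the authors' pipeline: the `llm_generate` callable, the genre and audience lists, and the prompt wording are all assumptions introduced for this example.

```python
# Minimal sketch of genre-audience reformulation (illustrative only).
# Assumes a user-supplied llm_generate(prompt) -> str that wraps whatever
# instruction-following model performs the rewriting.
import random
from typing import Callable, Iterable, List, Optional

GENRES = ["encyclopedia entry", "dialogue", "tutorial", "news brief"]       # hypothetical
AUDIENCES = ["middle-school students", "domain experts", "casual readers"]  # hypothetical


def reformulate(doc: str,
                llm_generate: Callable[[str], str],
                pairs_per_doc: int = 3,
                seed: Optional[int] = None) -> List[str]:
    """Produce several genre-audience variations of one source document."""
    rng = random.Random(seed)
    variations = []
    for _ in range(pairs_per_doc):
        genre = rng.choice(GENRES)
        audience = rng.choice(AUDIENCES)
        prompt = (
            f"Rewrite the following text as a {genre} aimed at {audience}, "
            f"preserving its factual content:\n\n{doc}"
        )
        variations.append(llm_generate(prompt))
    return variations


def augment_corpus(docs: Iterable[str],
                   llm_generate: Callable[[str], str]) -> List[str]:
    """Keep each original document and append its reformulated variations."""
    augmented: List[str] = []
    for doc in docs:
        augmented.append(doc)
        augmented.extend(reformulate(doc, llm_generate))
    return augmented
```

Under this sketch, each source document yields `pairs_per_doc` additional variations, so the corpus grows without repeating the same token sequences verbatim, which is the repetition effect the abstract describes.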