Scaling laws for language models have often focused on finding the optimal model size and token count for training from scratch. However, achieving this optimal balance requires significant compute, because training from randomly initialized weights demands extensive data. Continued pretraining offers a cost-effective alternative, leveraging the compute already invested in pretrained models to incorporate new knowledge without requiring extensive new data. Recent findings suggest that data quality influences the constants in scaling laws, thereby altering the optimal parameter-token allocation ratio. Building on this insight, we investigate the interplay between domain specialization and model size during continued pretraining in compute-constrained settings. Our goal is to identify an optimal training regime for this setting and to detect patterns in this interplay that generalize across model sizes and domains. To compare general and specialized training, we filtered a web-based dataset to extract data from three domains: legal, medical, and accounting. We pretrained models with 1.5B, 3B, 7B, and 14B parameters on both the unfiltered and filtered datasets, then evaluated their performance on domain-specific exams. Results show that as model size increases, specialized models outperform general models while requiring less training compute. Moreover, their growing compute efficiency reduces forgetting of previously learned knowledge.
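To make the claim about "constants in scaling laws" concrete, the following is a minimal sketch assuming the standard Chinchilla-style parametric loss of Hoffmann et al. (2022); the symbols $E$, $A$, $B$, $\alpha$, $\beta$, and $G$ come from that formulation and are assumptions here, not fits reported in this work:

\[
L(N, D) \;=\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad C \approx 6ND,
\]
\[
N_{\mathrm{opt}}(C) = G\left(\tfrac{C}{6}\right)^{\frac{\beta}{\alpha+\beta}},
\qquad
D_{\mathrm{opt}}(C) = G^{-1}\left(\tfrac{C}{6}\right)^{\frac{\alpha}{\alpha+\beta}},
\qquad
G = \left(\frac{\alpha A}{\beta B}\right)^{\frac{1}{\alpha+\beta}}.
\]

Under this form, data quality enters through the fitted constants $A$, $B$, $\alpha$, and $\beta$; shifting them changes $G$ and the exponents, and therefore the compute-optimal split between parameters $N$ and training tokens $D$.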