Scaling laws are a critical component of the LLM development pipeline, most famously as a way to forecast training decisions such as the 'compute-optimal' trade-off between parameter count and dataset size, alongside a growing list of other crucial decisions. In this work, we ask whether compute-optimal scaling behaviour can be skill-dependent. In particular, we examine knowledge-based and reasoning-based skills, exemplified by knowledge-based QA and code generation respectively, and we answer this question in the affirmative: scaling laws are skill-dependent. Next, to understand whether skill-dependent scaling is an artefact of the pretraining datamix, we conduct an extensive ablation over different datamixes and find that, even after correcting for datamix differences, knowledge and code exhibit fundamental differences in scaling behaviour. We conclude with an analysis of how our findings relate to standard compute-optimal scaling using a validation set, and find that a misspecified validation set can shift the compute-optimal parameter count by nearly 50%, depending on its skill composition.
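To make the notion of a skill-dependent compute-optimal trade-off concrete, the sketch below grid-searches the parameter count that minimises a Chinchilla-style loss L(N, D) = E + A/N^a + B/D^b at a fixed compute budget C ≈ 6ND, for two skills with different (hypothetical) coefficients. All coefficient values and the budget are illustrative assumptions for exposition, not the paper's fitted values; the only point is that two loss surfaces with different exponents yield different compute-optimal model sizes.

```python
import numpy as np

# Hypothetical per-skill Chinchilla-style loss L(N, D) = E + A/N^a + B/D^b.
# Coefficients are ILLUSTRATIVE assumptions, not values fitted in the paper.
SKILLS = {
    "knowledge": dict(E=1.8, A=800.0, a=0.34, B=400.0, b=0.30),
    "code":      dict(E=1.2, A=250.0, a=0.28, B=3000.0, b=0.36),
}

def loss(N, D, E, A, a, B, b):
    """Parametric loss as a function of parameters N and training tokens D."""
    return E + A / N**a + B / D**b

def optimal_N(compute, params, n_grid=2000):
    """Grid-search the parameter count minimising loss at fixed compute,
    using the standard approximation C ~ 6*N*D (so D = C / (6*N))."""
    Ns = np.logspace(7, 11, n_grid)   # 10M .. 100B parameters
    Ds = compute / (6.0 * Ns)
    return Ns[np.argmin(loss(Ns, Ds, **params))]

C = 1e21  # FLOPs budget (illustrative)
n_know = optimal_N(C, SKILLS["knowledge"])
n_code = optimal_N(C, SKILLS["code"])
print(f"knowledge-optimal N: {n_know:.3g}")
print(f"code-optimal N:      {n_code:.3g}")
```

With these assumed coefficients, the knowledge skill prefers a larger model (more parameters, fewer tokens) while code prefers a smaller model trained on more data, which is why a validation set weighted toward one skill or the other can move the apparent compute-optimal parameter count substantially.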