Although experimental neural scaling laws have substantially guided empirical progress in large-scale machine learning, no existing theory can quantitatively predict the exponents of these laws for any modern LLM trained on any natural language dataset. We provide the first such theory for data-limited scaling laws. We isolate two key statistical properties of language that alone suffice to predict neural scaling exponents: (i) the decay of pairwise token correlations with the separation between tokens, and (ii) the decay of the next-token conditional entropy with the length of the conditioning context. We further derive a simple formula in terms of these statistics that predicts data-limited neural scaling exponents from first principles, without free parameters or synthetic data models. Our theory exhibits a remarkable match with experimentally measured neural scaling laws obtained by training GPT-2 and LLaMA-style models from scratch on two qualitatively different benchmarks, TinyStories and WikiText.
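To make the two statistics concrete, the sketch below estimates (i) the pairwise token mutual information at a given separation and (ii) the plug-in conditional entropy of the next token given a fixed-length context, directly from a token sequence. This is a minimal, hypothetical illustration, not the paper's estimator or formula: the function names and the toy corpus are invented for the example, and plug-in estimates of both quantities are biased on short samples.

```python
import math
from collections import Counter

def mutual_information_at_separation(tokens, tau):
    """Plug-in estimate of I(x_i ; x_{i+tau}) in bits from joint pair counts."""
    pairs = Counter(zip(tokens[:-tau], tokens[tau:]))
    left = Counter(tokens[:-tau])
    right = Counter(tokens[tau:])
    n = len(tokens) - tau
    mi = 0.0
    for (a, b), c in pairs.items():
        p_ab = c / n
        p_a = left[a] / n
        p_b = right[b] / n
        mi += p_ab * math.log2(p_ab / (p_a * p_b))
    return mi

def conditional_entropy(tokens, context_len):
    """Plug-in estimate of H(x_t | x_{t-k}, ..., x_{t-1}) in bits, k = context_len."""
    ctx_next = Counter()
    ctx = Counter()
    for i in range(context_len, len(tokens)):
        c = tuple(tokens[i - context_len:i])
        ctx_next[(c, tokens[i])] += 1
        ctx[c] += 1
    n = len(tokens) - context_len
    h = 0.0
    for (c, x), cnt in ctx_next.items():
        p_joint = cnt / n          # empirical P(context, next token)
        p_cond = cnt / ctx[c]      # empirical P(next token | context)
        h -= p_joint * math.log2(p_cond)
    return h

if __name__ == "__main__":
    # Toy corpus; in practice one would use token ids from TinyStories or WikiText.
    toks = ("the cat sat on the mat and the cat ran to the mat " * 200).split()
    for tau in (1, 2, 4, 8, 16):
        print(f"I(tau={tau:2d}) = {mutual_information_at_separation(toks, tau):.3f} bits")
    for k in (1, 2, 3):
        print(f"H(next | {k}-token context) = {conditional_entropy(toks, k):.3f} bits")
```

The decay of the first quantity with tau and of the second with context length are the two curves the abstract refers to; how they combine into a prediction for the data-limited scaling exponent is specified by the paper's formula, which is not reproduced here.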