Understanding how language model performance varies with scale is critical to benchmark and algorithm development. Scaling laws are one approach to building this understanding, but the requirement of training models across many different scales has limited their use. We propose an alternative, observational approach that bypasses model training and instead builds scaling laws from ~100 publicly available models. Building a single scaling law from multiple model families is challenging due to large variations in their training compute efficiencies and capabilities. However, we show that these variations are consistent with a simple, generalized scaling law where language model performance is a function of a low-dimensional capability space, and model families vary only in their efficiency in converting training compute to capabilities. Using this approach, we show the surprising predictability of complex scaling phenomena: we show that several emergent phenomena follow a smooth, sigmoidal behavior and are predictable from small models; we show that the agent performance of models such as GPT-4 can be precisely predicted from simpler non-agentic benchmarks; and we show how to predict the impact of post-training interventions like Chain-of-Thought and Self-Consistency as language model capabilities continue to improve.
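To make the core idea concrete, the sketch below illustrates one plausible reading of the approach: extract a low-dimensional capability score from several benchmark accuracies via PCA, then fit a sigmoid mapping that score to a downstream metric. This is a minimal sketch on synthetic data, assuming a single latent capability dimension; all names and the grid-search fit are illustrative, not the paper's actual method or data.

```python
# Hedged sketch of an observational scaling-law fit on synthetic data.
# Assumption: 50 "models" scored on 4 benchmarks, all driven by one
# latent capability plus noise (names here are illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic benchmark matrix: 50 models x 4 benchmarks.
latent = rng.uniform(-3, 3, size=50)
benchmarks = (latent[:, None] * rng.uniform(0.5, 1.5, size=4)
              + rng.normal(0, 0.2, (50, 4)))

# PCA via SVD: the first principal component serves as the
# low-dimensional capability score for each model.
centered = benchmarks - benchmarks.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
capability = centered @ vt[0]

# A downstream "emergent" metric that is sigmoidal in the latent
# capability (mirroring the smooth, sigmoidal behavior in the abstract).
def sigmoid(x, a, b):
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

target = sigmoid(latent, 1.5, 0.0) + rng.normal(0, 0.02, 50)

# Crude least-squares grid search for the sigmoid parameters; a real
# analysis would use a proper nonlinear fit.
grid = np.linspace(-2, 2, 81)
best = min(((a, b) for a in grid for b in grid),
           key=lambda ab: np.sum((sigmoid(capability, *ab) - target) ** 2))
print("fitted (a, b):", best)
```

The grid includes negative slopes because the sign of a principal component is arbitrary; the fit recovers the correct orientation either way.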