Large Language Models (LLMs) have emerged as a milestone in artificial intelligence, and their performance can improve as the model size increases. However, this scaling brings great challenges to training and inference efficiency, particularly for deploying LLMs in resource-constrained environments, and the scaling trend is becoming increasingly unsustainable. This paper introduces the concept of ``\textit{capacity density}'' as a new metric to evaluate the quality of LLMs across different scales, and uses it to describe the trend of LLMs in terms of both effectiveness and efficiency. To calculate the capacity density of a given target LLM, we first introduce a set of reference models and develop a scaling law that predicts the downstream performance of these reference models based on their parameter sizes. We then define the \textit{effective parameter size} of the target LLM as the parameter size a reference model would require to achieve equivalent performance, and formalize capacity density as the ratio of the effective parameter size to the actual parameter size of the target LLM. Capacity density thus provides a unified framework for assessing both model effectiveness and efficiency. Our further analysis of recent open-source base LLMs reveals an empirical law (the \textit{densing law}): the capacity density of LLMs grows exponentially over time. More specifically, evaluated on several widely used benchmarks, the capacity density of LLMs doubles approximately every three months. This law provides a new perspective to guide future LLM development, emphasizing the importance of improving capacity density to achieve optimal results with minimal computational overhead.
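The capacity-density computation described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual fitting procedure: it assumes a simple log-linear scaling law fitted to hypothetical reference models (the parameter counts, benchmark scores, and the log-linear form are all illustrative assumptions), inverts the fit to obtain the effective parameter size, and takes the ratio against the target model's actual size.

```python
import numpy as np

# Hypothetical reference models: (parameter count in billions, benchmark score).
# These numbers are invented for illustration only.
ref_params = np.array([0.5, 1.0, 3.0, 7.0, 13.0])
ref_scores = np.array([0.35, 0.42, 0.51, 0.58, 0.63])

# Illustrative scaling law: score ~= alpha * ln(N) + beta, fitted by least squares.
# (The paper's actual law may take a different functional form.)
alpha, beta = np.polyfit(np.log(ref_params), ref_scores, deg=1)

def effective_param_size(score: float) -> float:
    """Invert the fitted law: the parameter size (in billions) a reference
    model would need to reach the given downstream score."""
    return float(np.exp((score - beta) / alpha))

def capacity_density(actual_params: float, score: float) -> float:
    """Capacity density = effective parameter size / actual parameter size."""
    return effective_param_size(score) / actual_params

# Hypothetical target LLM: 2.4B parameters scoring 0.55 on the benchmark.
density = capacity_density(actual_params=2.4, score=0.55)
print(f"effective size: {effective_param_size(0.55):.2f}B, density: {density:.2f}")
```

A density above 1 means the target model matches a larger reference model's performance with fewer parameters; under the densing law, this ratio for newly released models grows exponentially over time.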