Large Language Models (LLMs) have emerged as a milestone in artificial intelligence, and their performance improves as model size increases. However, this scaling poses significant challenges for training and inference efficiency, particularly when deploying LLMs in resource-constrained environments, and the scaling trend is becoming increasingly unsustainable. This paper introduces the concept of ``\textit{capacity density}'' as a new metric to evaluate the quality of LLMs across different scales, and to characterize the trend of LLMs in terms of both effectiveness and efficiency. To calculate the capacity density of a given target LLM, we first introduce a set of reference models and develop a scaling law to predict the downstream performance of these reference models based on their parameter sizes. We then define the \textit{effective parameter size} of the target LLM as the parameter size a reference model would require to achieve equivalent performance, and formalize the capacity density as the ratio of the effective parameter size to the actual parameter size of the target LLM. Capacity density thus provides a unified framework for assessing both model effectiveness and efficiency. Our further analysis of recent open-source base LLMs reveals an empirical law (the densing law) that the capacity density of LLMs grows exponentially over time. More specifically, evaluated on several widely used benchmarks, the capacity density of LLMs doubles approximately every three months. This law offers new perspectives to guide future LLM development, emphasizing the importance of improving capacity density to achieve optimal results with minimal computational overhead.
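The definition above can be sketched formally. Assuming the reference scaling law is an invertible function $S(N)$ mapping parameter size to downstream performance (the symbols $\rho$, $S$, $\hat{N}$, and $N$ here are illustrative notation, not taken from the source), the capacity density of a target model $\mathcal{M}$ with actual parameter size $N(\mathcal{M})$ and observed performance $P(\mathcal{M})$ is:
\begin{align}
\hat{N}(\mathcal{M}) &= S^{-1}\bigl(P(\mathcal{M})\bigr), \\
\rho(\mathcal{M}) &= \frac{\hat{N}(\mathcal{M})}{N(\mathcal{M})},
\end{align}
where $\hat{N}(\mathcal{M})$ is the effective parameter size, i.e., the parameter size at which a reference model would match the target model's performance under the fitted scaling law. A density $\rho > 1$ indicates the target model outperforms reference models of the same size.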