Building effective and efficient Transformer-based large language models (LLMs) has recently become a research focus, which requires maximizing model language capabilities while minimizing training and deployment costs. Existing efforts have primarily characterized the complex relationships among model performance, parameter size, and data size, and searched for the optimal compute allocation for training LLMs. However, they overlook the impact of context length and attention head configuration (the number of query and key-value heads in grouped-query attention) on training and inference. In this paper, we systematically compare models with different parameter sizes, context lengths, and attention head configurations in terms of model performance, computational cost, and memory cost. We then extend existing scaling methods, which are based solely on parameter size and training compute, to guide the construction of cost-optimal LLMs for both training and inference. Our quantitative scaling studies show that, when processing sufficiently long sequences, a larger model with fewer attention heads can achieve lower loss while incurring lower computational and memory costs. These findings provide valuable insights for developing practical LLMs, especially in long-context processing scenarios. We will publicly release our code and data.