The massive computational costs associated with large language model (LLM) pretraining have spurred great interest in reduced-precision floating-point representations to accelerate the process. As a result, the BrainFloat16 (BF16) precision has become the de facto standard for LLM training, with hardware support included in recent generations of accelerators. This trend has gone even further in the latest processors, where FP8 has recently been introduced. However, prior experience with FP16, which was found to be less stable than BF16, raises concerns about whether FP8, with even fewer bits than FP16, can be a cost-effective option for LLM training. We argue that reduced-precision training schemes must have similar training stability and hyperparameter sensitivities to their higher-precision counterparts in order to be cost-effective. However, we find that currently available methods for FP8 training are not robust enough to allow their use as economical replacements. This prompts us to investigate the stability of reduced-precision LLM training in terms of robustness across random seeds, learning rates, and datasets. To this end, we propose new evaluation techniques and a new metric for quantifying loss landscape sharpness in autoregressive language models. By simulating incremental bit reductions in floating-point representations, we analyze the relationship between representational power and training stability, with the intent of aiding future research in this area.
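The abstract mentions simulating incremental bit reductions in floating-point representations. A minimal sketch of one common way to do this is to round a float32 value to a reduced number of mantissa bits by manipulating its bit pattern; the function name `truncate_mantissa` and the rounding scheme below are illustrative assumptions, not the paper's actual method:

```python
import struct


def truncate_mantissa(x: float, mantissa_bits: int) -> float:
    """Simulate reduced precision by rounding a float32 value to
    `mantissa_bits` explicit mantissa bits (float32 has 23).

    Rounding is to nearest, with ties rounded up in magnitude,
    via the add-half-then-mask trick on the raw bit pattern.
    """
    drop = 23 - mantissa_bits  # number of low mantissa bits to remove
    if drop <= 0:
        return x  # nothing to truncate
    # Reinterpret the float32 as a 32-bit unsigned integer.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Add half of the dropped ULP, then clear the dropped bits.
    # A mantissa carry correctly propagates into the exponent field.
    half = 1 << (drop - 1)
    bits = (bits + half) & ~((1 << drop) - 1)
    return struct.unpack("<f", struct.pack("<I", bits))[0]


# Example: keeping 7 mantissa bits mimics BF16's mantissa width,
# so 0.1 rounds to the nearest BF16-representable value.
print(truncate_mantissa(0.1, 7))   # ~0.10009765625
print(truncate_mantissa(1.0, 7))   # exactly representable, unchanged
```

Sweeping `mantissa_bits` downward (e.g. 23 → 10 → 7 → 3) gives the kind of incremental bit reduction the abstract describes, letting one observe where training stability degrades without needing native low-precision hardware.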