The generalization abilities of well-trained large language models (LLMs) are known to scale predictably as a function of model size. In contrast to the practical scaling laws that govern pre-training, the quality of LLMs after post-training compression remains highly unpredictable, often requiring case-by-case validation in practice. In this work, we attempt to close this gap for post-training weight quantization of LLMs by conducting a systematic empirical study of multiple LLM families quantized to numerous low-precision tensor data types using popular weight quantization techniques. We identify key scaling factors pertaining to characteristics of the local loss landscape, based on which the performance of quantized LLMs can be reasonably well predicted by a statistical model.
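To make the setting concrete, below is a minimal sketch of round-to-nearest (RTN) quantization, one of the popular post-training weight quantization techniques the abstract refers to. The function name `quantize_rtn`, the symmetric per-channel scheme, and the 4-bit default are illustrative assumptions, not the paper's specific method.

```python
# Minimal illustrative sketch of round-to-nearest (RTN) weight quantization.
# The symmetric per-output-channel scheme shown here is an assumption for
# illustration; the paper studies multiple techniques and data types.
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize a 2-D weight matrix to a signed b-bit grid, then dequantize."""
    qmax = 2 ** (bits - 1) - 1                            # e.g. 7 for INT4
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax   # one scale per row
    scale = np.where(scale == 0, 1.0, scale)              # guard all-zero rows
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)     # round to the grid
    return q * scale                                      # dequantized weights

# Example: the round-off error injected into the weights is the perturbation
# whose effect on the loss a predictive model of quantized performance
# must capture.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 64)).astype(np.float32)
w_q = quantize_rtn(w, bits=4)
print("mean abs quantization error:", np.abs(w - w_q).mean())
```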