Low-precision training is critical for optimizing the trade-off between model quality and training cost, necessitating a joint allocation of model size, dataset size, and numerical precision. While empirical scaling laws suggest that quantization either alters the effective model and data capacities or acts as an additive error, the theoretical mechanisms governing these effects remain largely unexplored. In this work, we initiate a theoretical study of scaling laws for low-precision training within a high-dimensional sketched linear regression framework. By analyzing multiplicative (signal-dependent) and additive (signal-independent) quantization, we identify a critical dichotomy in their scaling behaviors. Our analysis reveals that while both schemes introduce an additive error term and reduce the effective data size, they affect the effective model size differently: multiplicative quantization preserves the full-precision effective model size, whereas additive quantization reduces it. Numerical experiments validate our theoretical findings. By rigorously characterizing the complex interplay among model scale, dataset size, and quantization error, our work provides a principled theoretical basis for optimizing training protocols under practical hardware constraints.
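To make the two noise models concrete, the minimal sketch below contrasts signal-dependent (multiplicative) and signal-independent (additive) quantization in a toy sketched linear regression. The Gaussian sketching matrix, the noise level `sigma`, and the function names are illustrative assumptions for this sketch, not the paper's exact formulation or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def multiplicative_quantize(x, sigma=0.05, rng=rng):
    # Signal-dependent (multiplicative) quantization: noise scales with the signal.
    return x * (1.0 + sigma * rng.standard_normal(x.shape))

def additive_quantize(x, sigma=0.05, rng=rng):
    # Signal-independent (additive) quantization: noise has a fixed magnitude.
    return x + sigma * rng.standard_normal(x.shape)

# Toy sketched linear regression: project d-dimensional features to m dimensions,
# quantize the sketched features, and fit the sketched model by least squares.
d, m, n = 200, 50, 1000
w_star = rng.standard_normal(d) / np.sqrt(d)        # ground-truth signal
X = rng.standard_normal((n, d))
y = X @ w_star + 0.01 * rng.standard_normal(n)
S = rng.standard_normal((d, m)) / np.sqrt(m)        # sketching matrix (assumed Gaussian)

for name, quantize in [("multiplicative", multiplicative_quantize),
                       ("additive", additive_quantize)]:
    Xq = quantize(X @ S)                            # quantized sketched features
    w_hat = np.linalg.lstsq(Xq, y, rcond=None)[0]   # least-squares fit in sketched space
    risk = np.mean((X @ S @ w_hat - X @ w_star) ** 2)
    print(f"{name:>14} quantization, excess risk ~ {risk:.4f}")
```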