Weight initialization governs signal propagation and gradient flow at the start of training. This paper presents a theory-grounded, empirically validated study in two settings: compact ReLU multilayer perceptrons and GPT-2-style transformers. First, a logarithmic sweep of the initial standard deviation maps the vanishing and exploding regimes and identifies a broad stability band between 1e-2 and 1e-1. Second, a controlled comparison shows that Kaiming (fan-in) initialization converges faster and more stably than Xavier under ReLU, consistent with variance-preserving theory. Third, in a from-scratch 12-layer GPT-2-style model, the paper tracks layerwise Q/K/V weight variance through pretraining and observes depth-dependent equilibration into narrow bands: shallow layers expand rapidly while deeper layers change more gradually. Together, these results connect classic initialization principles with modern transformer behavior and yield simple, practical recipes for robust training.
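As a quick illustration of the variance-preservation argument behind the Kaiming-vs-Xavier comparison, the sketch below (not the paper's code; the depth, width, and batch size are illustrative assumptions) propagates a random batch through a deep ReLU stack and records the mean squared activation per layer. Under Kaiming fan-in scaling the signal magnitude stays roughly constant with depth, while under Xavier it shrinks by about a factor of two per ReLU layer.

```python
# Minimal sketch (illustrative, not the paper's setup): compares how Kaiming (fan-in)
# and Xavier initialization carry signal magnitude through a deep ReLU MLP.
import torch
import torch.nn as nn

def signal_magnitudes(init_fn, depth=16, width=512, batch=1024):
    """Return the mean squared activation after each ReLU layer."""
    torch.manual_seed(0)
    x = torch.randn(batch, width)  # unit-variance input
    magnitudes = []
    for _ in range(depth):
        layer = nn.Linear(width, width, bias=False)
        init_fn(layer.weight)          # override the default initialization
        x = torch.relu(layer(x))
        magnitudes.append(x.pow(2).mean().item())
    return magnitudes

kaiming = signal_magnitudes(
    lambda w: nn.init.kaiming_normal_(w, mode="fan_in", nonlinearity="relu"))
xavier = signal_magnitudes(
    lambda w: nn.init.xavier_normal_(w))

# Kaiming (Var[w] = 2/fan_in) compensates for the ReLU halving of the second moment,
# so the magnitude stays near 1; Xavier (Var[w] = 2/(fan_in + fan_out)) does not,
# so the magnitude decays roughly as (1/2)^depth.
for d, (k, v) in enumerate(zip(kaiming, xavier), 1):
    print(f"layer {d:2d}  kaiming {k:10.4e}   xavier {v:10.4e}")
```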