Transformers have become the de facto architecture for a wide range of machine learning tasks, particularly in large language models (LLMs). Despite their remarkable performance, challenges remain in training deep transformer networks, especially regarding the placement of layer normalization. While Pre-Norm structures ease training thanks to their more prominent identity path, they often yield suboptimal performance compared to Post-Norm. In this paper, we propose $\textbf{HybridNorm}$, a straightforward yet effective hybrid normalization strategy that integrates the advantages of both Pre-Norm and Post-Norm approaches. Specifically, HybridNorm employs QKV normalization within the attention mechanism and Post-Norm in the feed-forward network (FFN) of each transformer block. This design not only stabilizes training but also enhances performance, particularly in the context of LLMs. Comprehensive experiments on both dense and sparse architectures show that HybridNorm consistently outperforms both Pre-Norm and Post-Norm approaches, achieving state-of-the-art results across various benchmarks. These findings highlight the potential of HybridNorm as a more stable and effective technique for improving the training and performance of deep transformer models. Code is available at https://github.com/BryceZhuo/HybridNorm.
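To make the normalization placement concrete, the following is a minimal numpy sketch of one transformer block in the hybrid style described above: RMS normalization applied to the projected queries, keys, and values inside attention, and Post-Norm (normalization after the residual addition) around the FFN. This is an illustrative single-head sketch with no learned norm gains; the exact details (norm type, head structure, gain parameters) in the paper may differ.

```python
import numpy as np

def rms_norm(x, eps=1e-6):
    # RMSNorm without a learned gain: divide by the root-mean-square
    # over the last dimension.
    return x / np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hybridnorm_block(x, Wq, Wk, Wv, Wo, W1, W2):
    """One single-head transformer block in the hybrid style sketched
    here: QKV normalization inside attention, Post-Norm around the FFN.
    Shapes: x is (seq, d); all weight names are illustrative."""
    d = x.shape[-1]
    # QKV normalization: normalize queries, keys, and values after
    # projection, rather than normalizing the block input (Pre-Norm).
    q = rms_norm(x @ Wq)
    k = rms_norm(x @ Wk)
    v = rms_norm(x @ Wv)
    attn = softmax(q @ k.T / np.sqrt(d)) @ v
    h = x + attn @ Wo                       # residual around attention
    # Post-Norm FFN: normalization is applied after the residual add.
    ffn = np.maximum(h @ W1, 0.0) @ W2      # simple ReLU MLP
    return rms_norm(h + ffn)

rng = np.random.default_rng(0)
d, seq = 8, 4
x = rng.standard_normal((seq, d))
Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))
W1 = rng.standard_normal((d, 4 * d)) * 0.1
W2 = rng.standard_normal((4 * d, d)) * 0.1
y = hybridnorm_block(x, Wq, Wk, Wv, Wo, W1, W2)
print(y.shape)  # (4, 8)
```

Because the final operation is the post-residual normalization, each output row has unit root-mean-square, which is the stabilizing effect Post-Norm contributes in this hybrid layout.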