While large language models (LLMs) are driving the rapid advancement of artificial intelligence, effectively and reliably training these large models remains one of the field's most significant challenges. To address this challenge, we propose POET, a novel reParameterized training algorithm that uses Orthogonal Equivalence Transformation to optimize neurons. Specifically, POET reparameterizes each neuron with two learnable orthogonal matrices and a fixed random weight matrix. Because it provably preserves the spectral properties of weight matrices, POET can stably optimize the objective function with improved generalization. We further develop efficient approximations that make POET flexible and scalable for training large-scale neural networks. Extensive experiments validate the effectiveness and scalability of POET in training LLMs.
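To make the reparameterization concrete, below is a minimal sketch (not the authors' implementation) assuming the effective weight of a layer takes the form W = R · W0 · P, where W0 is the fixed random weight matrix and R, P are the two learnable orthogonal matrices; the class name `POETLinear` is hypothetical. Because multiplying by orthogonal matrices leaves singular values unchanged, the sketch also checks the spectral-preservation property the abstract refers to.

```python
# Minimal sketch of a POET-style reparameterized layer (illustrative, not the official code).
# Assumption: effective weight W = R @ W0 @ P, with W0 fixed and R, P learnable orthogonal.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal


class POETLinear(nn.Module):  # hypothetical module name
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed random weight matrix W0 (registered as a buffer, never updated).
        self.register_buffer(
            "W0", torch.randn(out_features, in_features) / in_features**0.5
        )
        # Learnable orthogonal matrices: R (left, out x out) and P (right, in x in).
        self.R = orthogonal(nn.Linear(out_features, out_features, bias=False))
        self.P = orthogonal(nn.Linear(in_features, in_features, bias=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight W = R @ W0 @ P has the same singular values as W0,
        # since orthogonal transformations preserve the spectrum.
        W = self.R.weight @ self.W0 @ self.P.weight
        return x @ W.T


if __name__ == "__main__":
    layer = POETLinear(16, 32)
    # Sanity check: singular values of the effective weight match those of W0.
    s_eff = torch.linalg.svdvals(layer.R.weight @ layer.W0 @ layer.P.weight)
    s_w0 = torch.linalg.svdvals(layer.W0)
    print(torch.allclose(s_eff, s_w0, atol=1e-5))
```

In this sketch, only the parameters of R and P receive gradients, while W0 stays at its random initialization, which is one plausible reading of how the spectral properties of the weights are preserved throughout training.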