While large language models (LLMs) are driving the rapid advancement of artificial intelligence, effectively and reliably training these large models remains one of the field's most significant challenges. To address this challenge, we propose POET, a novel reParameterized training algorithm that uses Orthogonal Equivalence Transformation to optimize neurons. Specifically, POET reparameterizes each neuron with two learnable orthogonal matrices and a fixed random weight matrix. Because of its provable preservation of spectral properties of weight matrices, POET can stably optimize the objective function with improved generalization. We further develop efficient approximations that make POET flexible and scalable for training large-scale neural networks. Extensive experiments validate the effectiveness and scalability of POET in training LLMs.
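To make the reparameterization concrete, here is a minimal PyTorch sketch of the idea. It is not the paper's implementation: the layer name `POETLinear`, the initialization scale, and the use of torch's built-in orthogonal parametrization are illustrative assumptions. The point it demonstrates is that each weight matrix is expressed as a fixed random matrix sandwiched between two learnable orthogonal matrices, so its singular values stay fixed during training.

```python
import torch
import torch.nn as nn


class POETLinear(nn.Module):
    """Sketch of a POET-style reparameterized linear layer (illustrative, not the paper's code).

    The effective weight is W = R @ W0 @ Q, where W0 is a fixed random matrix and
    R, Q are learnable matrices constrained to be orthogonal. Since orthogonal
    transformations preserve singular values, W keeps the spectrum of W0.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed random weight matrix: registered as a buffer, never updated by the optimizer.
        self.register_buffer(
            "W0", torch.randn(out_features, in_features) / in_features ** 0.5
        )
        # Learnable square matrices, constrained to stay orthogonal via PyTorch's
        # built-in orthogonal parametrization.
        self.R = nn.Linear(out_features, out_features, bias=False)
        self.Q = nn.Linear(in_features, in_features, bias=False)
        nn.utils.parametrizations.orthogonal(self.R, "weight")
        nn.utils.parametrizations.orthogonal(self.Q, "weight")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Orthogonal equivalence transformation of the fixed random weight.
        W = self.R.weight @ self.W0 @ self.Q.weight
        return x @ W.T


# Usage example: only the orthogonal factors R and Q receive gradients.
layer = POETLinear(in_features=64, out_features=32)
out = layer(torch.randn(8, 64))
out.sum().backward()
```

Because only R and Q are trained while W0 stays fixed, the singular values of the effective weight never drift, which is the spectral-preservation property the abstract credits for stable optimization.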