Training large AI models such as LLMs and DLRMs requires massive GPU resources and computing time. The high training cost is affordable only to big tech companies, and it also raises increasing concerns about environmental impact. This paper presents CoMERA, a Computing- and Memory-Efficient training method via Rank-Adaptive tensor optimization. CoMERA achieves rank-adaptive tensor-compressed (pre-)training via a multi-objective optimization formulation, providing both a high compression ratio and excellent accuracy during training. Our optimized numerical computation (e.g., optimized tensorized embeddings and tensor-network contractions) and GPU implementation eliminate part of the run-time overhead of tensorized training on GPUs. This leads to, for the first time, a $2$-$3\times$ speedup per training epoch compared with standard training. CoMERA also outperforms the recent GaLore in both memory and computing efficiency. Specifically, CoMERA is $2\times$ faster per training epoch and $9\times$ more memory-efficient than GaLore on a tested six-encoder transformer with single-batch training. Our method also achieves a $\sim 2\times$ speedup over standard pre-training on a BERT-like code-generation LLM while attaining a $4.23\times$ compression ratio during pre-training. With further HPC optimization, CoMERA may reduce the pre-training cost of many other LLMs. An implementation of CoMERA is available at https://github.com/ziyangjoy/CoMERA.
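The core idea behind tensor-compressed training is to replace a large weight matrix with a few small low-rank tensor cores (e.g., in tensor-train format) and contract the input with the cores directly, never materializing the full matrix. The following is a minimal NumPy sketch of that idea with hypothetical shapes and a fixed rank; it is not the CoMERA implementation, which additionally adapts the ranks via multi-objective optimization during training.

```python
import numpy as np

np.random.seed(0)

# Hypothetical example: factorize a 256x256 weight into two tensor-train
# cores by splitting each dimension as 256 = 16 * 16.
# W[(i1,i2),(j1,j2)] = sum_r G1[i1, j1, r] * G2[r, i2, j2]
rank = 4                                    # illustrative fixed TT-rank
G1 = 0.1 * np.random.randn(16, 16, rank)    # core 1: (i1, j1, r)
G2 = 0.1 * np.random.randn(rank, 16, 16)    # core 2: (r, i2, j2)

# Reconstruct the full weight only to verify the contraction below.
W = np.einsum('ajr,rbk->abjk', G1, G2).reshape(256, 256)

x = np.random.randn(8, 256)                 # a batch of 8 input vectors

# Forward pass contracting the input with the cores directly,
# without ever forming the 256x256 matrix W:
xr = x.reshape(8, 16, 16)                   # split input index into (i1, i2)
t = np.einsum('nab,ajr->nbjr', xr, G1)      # contract over i1
y = np.einsum('nbjr,rbk->njk', t, G2).reshape(8, 256)  # contract over i2, r

assert np.allclose(y, x @ W)                # same result as the dense layer

# Parameter counts: dense layer vs. TT-compressed layer.
full_params = 256 * 256                     # 65536
tt_params = G1.size + G2.size               # 2048
print(full_params / tt_params)              # compression ratio: 32.0
```

Larger ranks trade compression for expressiveness; rank-adaptive methods such as CoMERA learn a suitable rank per layer rather than fixing it in advance as done here.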