We present EE-LLM, a framework for large-scale training and inference of early-exit large language models (LLMs). While recent works have shown preliminary evidence for the efficacy of early exiting in accelerating LLM inference, EE-LLM takes a foundational step toward scaling up early-exit LLMs by supporting their training and inference with massive 3D parallelism. Built upon Megatron-LM, EE-LLM implements a variety of algorithmic innovations and performance optimizations tailored to early exiting, including a lightweight method that facilitates backpropagation for the early-exit training objective under pipeline parallelism, techniques for leveraging idle resources in the original pipeline schedule for computation related to early-exit layers, and two approaches to early-exit inference that are compatible with KV caching for autoregressive generation. Our analytical and empirical study shows that EE-LLM achieves high training efficiency with negligible computational overhead compared to standard LLM training, as well as substantial inference speedup without compromising output quality. To facilitate further research and adoption, we release EE-LLM at https://github.com/pan-x-c/EE-LLM.
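To make the KV-caching point concrete, below is a minimal PyTorch sketch of confidence-based early-exit greedy decoding with per-layer KV caches. It illustrates the cache-consistency issue rather than EE-LLM's actual implementation: the names (EarlyExitLM, CachedSelfAttention), the toy sizes, and the 0.9 confidence threshold are all illustrative assumptions, and the simple strategy used here, where a token that exits early still fills the deeper layers' KV entries by finishing its forward pass, gives up that token's compute saving, which EE-LLM's two inference approaches address more efficiently.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    DIM, VOCAB, N_LAYERS = 32, 100, 4   # toy sizes (assumptions)
    EXIT_LAYER, THRESHOLD = 1, 0.9      # exit-head position and confidence bar (assumptions)

    class CachedSelfAttention(nn.Module):
        # Single-head self-attention that appends the new token's K/V to a cache;
        # causality holds because we decode one token at a time over past entries.
        def __init__(self, dim):
            super().__init__()
            self.qkv = nn.Linear(dim, 3 * dim)
            self.out = nn.Linear(dim, dim)

        def forward(self, x, cache):            # x: (1, 1, dim), one new token
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            cache["k"] = k if cache["k"] is None else torch.cat([cache["k"], k], dim=1)
            cache["v"] = v if cache["v"] is None else torch.cat([cache["v"], v], dim=1)
            att = F.softmax(q @ cache["k"].transpose(1, 2) / DIM ** 0.5, dim=-1)
            return self.out(att @ cache["v"])

    class EarlyExitLM(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            self.layers = nn.ModuleList(CachedSelfAttention(DIM) for _ in range(N_LAYERS))
            self.early_head = nn.Linear(DIM, VOCAB)   # exit head after EXIT_LAYER
            self.final_head = nn.Linear(DIM, VOCAB)   # standard LM head

        @torch.no_grad()
        def step(self, token, caches, generating):
            h = self.embed(torch.tensor([[token]]))
            next_tok = None
            for i, layer in enumerate(self.layers):
                h = h + layer(h, caches[i])           # KV cache of layer i always updated
                if generating and next_tok is None and i == EXIT_LAYER:
                    probs = F.softmax(self.early_head(h[0, -1]), dim=-1)
                    conf, cand = probs.max(dim=-1)
                    if conf.item() >= THRESHOLD:      # confident: commit the early prediction,
                        next_tok = cand.item()        # but keep filling deeper KV caches so
                                                      # later tokens can attend to this one
            return next_tok if next_tok is not None else self.final_head(h[0, -1]).argmax().item()

    model = EarlyExitLM().eval()
    caches = [{"k": None, "v": None} for _ in range(N_LAYERS)]
    tokens = [1, 2, 3]                                # toy prompt
    for t in tokens[:-1]:                             # prefill the KV caches
        model.step(t, caches, generating=False)
    nxt = tokens[-1]
    for _ in range(5):                                # greedy decoding with possible early exits
        nxt = model.step(nxt, caches, generating=True)
        tokens.append(nxt)
    print(tokens)

The untrained toy model produces arbitrary tokens; the point is that every generated token leaves complete KV entries at all layers, which is the property an early-exit inference method must preserve, or cheaply restore, to remain compatible with autoregressive KV caching.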