We present EE-LLM, a framework for large-scale training and inference of early-exit large language models (LLMs). While recent works have shown preliminary evidence for the efficacy of early exiting in accelerating LLM inference, EE-LLM takes a foundational step towards scaling up early-exit LLMs by supporting their training and inference with massive 3D parallelism. Built upon Megatron-LM, EE-LLM implements a variety of algorithmic innovations and performance optimizations tailored to early exiting, including a lightweight method that facilitates backpropagation for the early-exit training objective under pipeline parallelism, techniques for leveraging idle resources in the original pipeline schedule for computation related to early-exit layers, and two approaches to early-exit inference that are compatible with KV caching for autoregressive generation. Our analytical and empirical study shows that EE-LLM achieves high training efficiency with negligible computational overhead compared to standard LLM training, as well as substantial inference speedup without compromising output quality. To facilitate further research and adoption, we release EE-LLM at https://github.com/pan-x-c/EE-LLM.