Parameter-efficient finetuning (PEFT) is a widely used technique for adapting large language models to different tasks. Service providers typically build separate systems for PEFT finetuning and for inference, because existing systems cannot handle workloads that mix inference and PEFT finetuning requests. As a result, shared GPU resources are underutilized, leading to inefficiencies. To address this problem, we present FlexLLM, the first system that can serve inference and parameter-efficient finetuning requests in the same iteration. Our system exploits the complementary nature of these two tasks and runs them jointly on shared GPU resources, an approach we call co-serving. To achieve this, FlexLLM introduces a novel token-level finetuning mechanism, which breaks the finetuning computation of a sequence into smaller token-level computations and applies two static compilation optimizations, dependent parallelization and graph pruning, to minimize the memory overhead and latency of co-serving. Compared to existing systems, FlexLLM's co-serving approach reduces the activation GPU memory overhead by up to 8x and the end-to-end GPU memory requirement of finetuning by up to 36%, while maintaining low inference latency and improving finetuning throughput. For example, under a heavy inference workload, FlexLLM still preserves more than 80% of peak finetuning throughput, whereas existing systems cannot make any progress on finetuning. The source code of FlexLLM is publicly available at https://github.com/flexflow/FlexFlow.
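The co-serving idea described above can be illustrated with a minimal, hypothetical scheduling sketch: each serving iteration has a fixed token budget, latency-sensitive inference tokens are admitted first, and leftover capacity is filled with token-level slices of a finetuning sequence. All function and variable names here are illustrative assumptions for exposition and are not taken from FlexLLM's actual API.

```python
# Hypothetical sketch of token-level co-serving within one iteration.
# Inference tokens are prioritized to keep latency low; any remaining
# slots in the iteration's token budget are used for finetuning tokens.

def co_serve_iteration(inference_tokens, finetune_sequence, budget):
    """Return (inference_batch, finetune_chunk, remaining_finetune_sequence)."""
    # Admit latency-sensitive inference work first, up to the budget.
    inference_batch = inference_tokens[:budget]
    slots_left = budget - len(inference_batch)
    # Fill leftover slots with the next token-level slice of finetuning work,
    # so finetuning makes progress even under inference load.
    finetune_chunk = finetune_sequence[:slots_left]
    return inference_batch, finetune_chunk, finetune_sequence[slots_left:]
```

Under this sketch, a heavy inference load shrinks the finetuning slice for that iteration rather than blocking finetuning entirely, which mirrors the throughput-preservation behavior the abstract reports.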