Offline LLM inference aims to maximize the number of requests processed under a fixed cost budget, making commodity GPU servers a promising choice. However, prior work typically considers offloading and parallelism in isolation, resulting in suboptimal performance. In this paper, we propose PipeMax, a high-throughput LLM inference system that integrates pipeline parallelism with offloading to overcome the interconnect and memory constraints of such servers. In particular, pipeline parallelism naturally incurs low communication overhead and keeps only one batch active on each GPU at a time, which makes it possible to offload the KV cache of inactive batches. By coordinating computation with offloading data movement, PipeMax effectively expands GPU memory capacity and sustains large-batch execution. Experiments on an 8-GPU node show that PipeMax achieves up to 2.51x higher throughput than vLLM, and up to 1.42x and 1.38x higher throughput than two state-of-the-art high-throughput LLM inference systems.
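To make the overlap idea concrete, the following is a minimal sketch (not PipeMax's actual implementation) of how a single pipeline stage could coordinate compute with KV-cache movement: while the active micro-batch runs on one CUDA stream, the next batch's KV cache is prefetched and the finished batch's cache is offloaded on a separate transfer stream. Names such as `stage_forward`, `kv_host`, and `micro_batches` are hypothetical, and host tensors are assumed to be pinned so copies can be asynchronous.

```python
import torch

compute_stream = torch.cuda.Stream()
transfer_stream = torch.cuda.Stream()

def run_stage(stage_forward, micro_batches, kv_host):
    """Process micro-batches on one pipeline stage, keeping only the
    active batch's KV cache on the GPU and streaming the rest to host.
    `kv_host` is a list of pinned CPU tensors, one per micro-batch."""
    ready = torch.cuda.Event()
    with torch.cuda.stream(transfer_stream):
        # Prefetch the first batch's KV cache onto the GPU.
        kv_dev = kv_host[0].to("cuda", non_blocking=True)
        ready.record()

    out = None
    for i, batch in enumerate(micro_batches):
        compute_stream.wait_event(ready)          # KV cache must be resident
        with torch.cuda.stream(compute_stream):
            out = stage_forward(batch, kv_dev)    # attention over resident KV

        done = torch.cuda.Event()
        done.record(compute_stream)

        if i + 1 < len(micro_batches):
            ready = torch.cuda.Event()
            with torch.cuda.stream(transfer_stream):
                # Overlap: bring in the next batch's KV cache while the
                # current batch is still computing.
                next_kv = kv_host[i + 1].to("cuda", non_blocking=True)
                ready.record()

        with torch.cuda.stream(transfer_stream):
            transfer_stream.wait_event(done)
            # Offload the finished batch's KV cache back to pinned memory.
            kv_host[i].copy_(kv_dev, non_blocking=True)

        if i + 1 < len(micro_batches):
            kv_dev = next_kv
    return out
```

The double-stream structure is what frees GPU memory for larger batches: at any moment only the active batch's KV cache needs to be resident, while transfers for its neighbors proceed in the background.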