Transformer-based Large Language Models (LLMs) have been widely adopted in many fields, and the efficiency of LLM inference has become a hot topic in real applications. However, LLMs usually have complicated model structures with massive operations and perform inference in an auto-regressive mode, which makes designing a high-efficiency inference system challenging. In this paper, we propose an efficient LLM inference solution with low latency and high throughput. First, we simplify the LLM decoder layer by fusing data movement and element-wise operations, reducing memory access frequency and lowering system latency. We also propose a segment KV cache policy that keeps the key/value tensors of request and response tokens in separate physical memory for effective device memory management, which helps enlarge the runtime batch size and improve system throughput. A customized Scaled-Dot-Product-Attention kernel is designed to match our fusion policy based on the segment KV cache solution. We implement our LLM inference solution on Intel GPUs and release it publicly. Compared with the standard HuggingFace implementation, the proposed solution achieves up to 7x lower token latency and 27x higher throughput for some popular LLMs on Intel GPUs.
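To illustrate the segment KV cache idea described above, the following is a minimal sketch, not the paper's actual implementation: prompt (request) token keys/values and generated (response) token keys/values live in separate physical buffers, and attention views the two segments as one logical sequence. All class and method names here are hypothetical, chosen only for illustration.

```python
import numpy as np


class SegmentKVCache:
    """Hypothetical sketch of a segment KV cache.

    Keys/values for the prompt (request) tokens and the generated
    (response) tokens are kept in separate physical buffers: the prompt
    segment is written once at prefill time, while the response segment
    grows token by token during decoding.
    """

    def __init__(self, num_heads: int, head_dim: int):
        self.prompt_k = None  # filled once at prefill
        self.prompt_v = None
        # response-segment buffers start empty and grow during decoding
        self.resp_k = np.empty((0, num_heads, head_dim), dtype=np.float16)
        self.resp_v = np.empty((0, num_heads, head_dim), dtype=np.float16)

    def prefill(self, k: np.ndarray, v: np.ndarray) -> None:
        # store all prompt-token keys/values in one contiguous block
        self.prompt_k, self.prompt_v = k, v

    def append(self, k: np.ndarray, v: np.ndarray) -> None:
        # append one decoded token's key/value to the response segment
        self.resp_k = np.concatenate([self.resp_k, k[None]], axis=0)
        self.resp_v = np.concatenate([self.resp_v, v[None]], axis=0)

    def full_kv(self):
        # the attention kernel reads both segments as one logical sequence
        return (np.concatenate([self.prompt_k, self.resp_k], axis=0),
                np.concatenate([self.prompt_v, self.resp_v], axis=0))
```

Because the two segments are physically separate, a large prompt block can be allocated once per request while the smaller, growing response block is managed independently, which is what allows the runtime batch size to be enlarged in the paper's scheme.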