Recent advancements in large language models (LLMs) with billions of parameters have generated significant demand for efficient deployment in inference workloads. Most existing approaches rely on temporal architectures that reuse hardware units across different network layers and operators. However, these methods often struggle to achieve low latency due to considerable memory access overhead. This paper investigates the feasibility and potential of model-specific spatial acceleration for LLM inference on FPGAs. Our approach specializes distinct hardware units for specific operators or layers, enabling direct communication between them through a dataflow architecture while minimizing off-chip memory accesses. We introduce a comprehensive analytical model for estimating the performance of a spatial LLM accelerator, taking into account the on-chip compute and memory resources available on an FPGA. Through this analysis, we determine the scenarios in which FPGA-based spatial acceleration can outperform its GPU-based counterpart. To enable more productive implementation of LLMs on FPGAs, we further provide a library of composable and reusable high-level synthesis (HLS) kernels, which will be released as open source. To validate the effectiveness of both our analytical model and HLS library, we implement BERT and GPT2 on an AMD Alveo U280 FPGA. Experimental results demonstrate that our approach achieves up to a 13.4x speedup over previous FPGA-based accelerators for the BERT model. For GPT generative inference, we attain a 2.2x speedup over DFX, an FPGA overlay, in the prefill stage, while achieving a 1.9x speedup and a 5.7x improvement in energy efficiency over the NVIDIA A100 GPU in the decode stage.