The rapid expansion of Transformer-based large language models has dramatically increased the need for high-performance GPUs. As a result, there is growing demand for fast, accurate, and widely generalizable GPU performance models to support next-generation hardware selection and system-level exploration. However, current data-driven methods are limited: they generalize poorly across hardware and inadequately model the complex production-level kernels common in modern inference stacks. To address these issues, we present SyncPerf, a unified GPU modeling framework. SyncPerf first employs an analytical model to quantify a given kernel's demands on the GPU's heterogeneous instruction pipelines. These analytical features are then fed into a machine learning (ML) model that captures complex cross-pipeline interactions and resource dependencies, enabling high-fidelity performance prediction. Our evaluation across 11 GPU types spanning four generations of major architectures, on two widely used serving systems, demonstrates that SyncPerf delivers high fidelity and strong generalizability. It achieves accurate predictions, with only 6.1% average error at the kernel level and 8.5% for end-to-end inference, reducing the error of state-of-the-art methods by 6.7x and 4.4x, respectively. We also demonstrate SyncPerf's value "beyond simulation": using its predicted performance ceiling to diagnose implementation shortcomings and guide the optimization of a production fused MoE Triton kernel, we achieve up to a 1.7x speedup.