Artificial Intelligence (AI) applications, such as Large Language Models, are primarily driven and executed by Graphics Processing Units (GPUs). These GPU programs (kernels) consume substantial amounts of energy, yet software developers often lack the hardware expertise and architecture-specific knowledge required to optimize for power efficiency. We propose FlipFlop, a framework that uses static code analysis to predict energy consumption and recommend Pareto-optimal thread block configurations balancing power consumption and execution time. Our framework requires no runtime execution and analyzes PTX code, a low-level instruction set for CUDA-enabled GPUs. We validate it across a diverse set of GPUs and kernels, including multi-head attention, convolution, and matrix multiplication. FlipFlop achieves 83% accuracy in identifying locally optimal energy-efficient configurations, while minimizing developer effort by reducing the optimization search space by 93.4%. For multi-head attention kernels, it yields up to 79% energy savings and 106% throughput gains relative to NVIDIA's occupancy heuristic. By integrating static analysis with real-time monitoring and providing explainable optimization guidance, FlipFlop empowers developers to create sustainable, high-performance GPU software that minimizes environmental and computational costs.
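To make the notion of Pareto-optimal thread block configurations concrete, the following is a minimal sketch of dominance filtering over per-configuration (energy, time) estimates. The block sizes and metric values are illustrative assumptions, not measurements from the paper, and the function is a generic Pareto-front filter rather than FlipFlop's actual selection procedure.

```python
def pareto_front(configs):
    """Return configurations not dominated on both energy and time.

    configs: list of (block_size, energy_joules, time_ms) tuples.
    A config is dominated if another config is no worse on both
    metrics and strictly better on at least one.
    """
    front = []
    for cfg in configs:
        _, e, t = cfg
        dominated = any(
            (e2 <= e and t2 < t) or (e2 < e and t2 <= t)
            for _, e2, t2 in configs
        )
        if not dominated:
            front.append(cfg)
    return front

# Hypothetical per-block-size estimates (energy in J, time in ms).
candidates = [
    (128, 12.0, 3.1),   # lowest energy, slowest
    (256, 15.5, 2.2),   # balanced trade-off
    (512, 16.0, 2.5),   # dominated by 256 on both metrics
    (1024, 20.0, 2.1),  # fastest, highest energy
]

print(pareto_front(candidates))
# The 512-thread configuration is filtered out; the remaining three
# each represent a distinct energy/time trade-off.
```

A framework like the one described would then only need to benchmark or recommend configurations on this front, which is how the search-space reduction the abstract reports can translate into less developer effort.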