The rapid adoption of Large Language Models (LLMs) has raised significant environmental concerns. Unlike the one-time cost of training, LLM inference occurs continuously at global scale and now dominates the AI energy footprint. Yet most sustainability studies report only coarse, model-level metrics because fine-grained measurement methods are lacking, treating energy efficiency as an afterthought rather than a primary objective. We present the first fine-grained empirical analysis of inference energy across the core components of the transformer architecture. We propose a novel methodology, Component-Level Energy Assessment via Repeated sampling (CLEAR), to overcome the temporal mismatch between microsecond-scale component execution and millisecond-scale energy-sensor monitoring. Using CLEAR, we evaluate 15 models spanning four distinct architecture types, consistently keeping component-wise energy variance below 9.5\% while attributing more than 90\% of each model's total energy to individual components. Our empirical analysis reveals that Attention blocks consume significantly more energy per floating-point operation (FLOP) than other components, indicating that energy consumption is not proportional to FLOP count; FLOPs alone therefore fail to capture the true energy cost at the component level. Our findings establish detailed component-level energy baselines and provide insights that serve as an initial step toward building energy-efficient transformer models through component-level optimization.
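To make the repeated-sampling idea behind CLEAR concrete, the sketch below illustrates one way to amortize a microsecond-scale component call over the millisecond-scale resolution of a GPU power sensor: the component is executed many times in a loop while board power is polled via NVML, power is integrated over the measurement window, and the resulting energy is divided by the repetition count. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names `gpu_power_watts` and `measure_component_energy` and the default repetition count are hypothetical.

\begin{verbatim}
# Minimal sketch of repeated-sampling energy measurement for a single
# transformer component. Assumes an NVIDIA GPU exposed through NVML
# (pynvml) and a PyTorch component; names below are illustrative.
import time
import pynvml
import torch

pynvml.nvmlInit()
_handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def gpu_power_watts() -> float:
    """Instantaneous board power from NVML (NVML reports milliwatts)."""
    return pynvml.nvmlDeviceGetPowerUsage(_handle) / 1000.0

def measure_component_energy(component, inputs, repeats: int = 10_000) -> float:
    """Estimate per-call energy (joules) of one component.

    The component runs `repeats` times so the workload spans many sensor
    update intervals; power samples are integrated over the window and the
    total energy is amortized across repetitions.
    """
    samples = []  # (timestamp, watts) pairs
    with torch.no_grad():
        for _ in range(repeats):
            component(*inputs)
            # Synchronize so the power sample reflects completed work,
            # not just kernel launches queued on the GPU.
            torch.cuda.synchronize()
            samples.append((time.perf_counter(), gpu_power_watts()))

    # Trapezoidal integration of power over time gives total energy.
    energy_j = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        energy_j += 0.5 * (p0 + p1) * (t1 - t0)
    return energy_j / repeats
\end{verbatim}

In practice the repetition count would be chosen so that the loop runs long enough for the sensor to settle (e.g., tens of milliseconds or more), and a baseline idle-power measurement could be subtracted to isolate the component's dynamic energy; both choices are assumptions for this sketch rather than details reported in the abstract.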