How many tokens can a GPU inference cluster deliver per watt? Across deployments of identical hardware, the answer varies by 40x -- not because of software inefficiency, but because of the serving context window. We derive the 1/W law: tokens per watt halves every time the context window doubles. A larger context window shrinks the KV-cache concurrency limit while leaving GPU power draw roughly unchanged. At 64K context, an H100 holds 16 sequences in flight (tok/W = 1.5); at 4K context, the same H100 holds 256 sequences (tok/W = 17.6). Routing topology -- which determines the effective context window each GPU serves -- is a more powerful energy lever than buying newer hardware. Working from published H100 power measurements, a calibrated logistic power model, and a roofline throughput model, we derive these results analytically using the inference-fleet-sim framework; no new hardware experiments were conducted. Two-pool context-length routing (FleetOpt) delivers roughly 2.5x better tok/W than a homogeneous fleet, while upgrading from H100 to B200 delivers roughly 1.7x. The gains compose multiplicatively: combining FleetOpt with B200 yields 4.25x over the homogeneous H100 baseline. B200/H200 numbers are analytical projections (±20% uncertainty); H100 results are calibrated to published measurements. For MoE models, active-parameter weight streaming adds a third lever. Qwen3-235B-A22B (22B active) reaches roughly 37.8 tok/W at 8K context on H100 -- 5.1x better than Llama-3.1-70B -- because decode time scales with the activated weights, not the total parameter count. MoE dispatch overhead is excluded, so this figure is an upper bound.
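The KV-cache arithmetic behind the 1/W law can be sketched in a few lines. This is a minimal illustration, not the paper's calibrated model: the `KV_TOKEN_BUDGET` constant below is inferred from the 16-sequences-at-64K figure (16 × 64K = 1M KV-cache tokens per H100), and tok/W is treated as directly proportional to concurrency under the roughly-constant-power assumption. The calibrated logistic power and roofline throughput models bend this proportionality somewhat, which is why the measured 4K figure is 17.6 tok/W rather than the 24 this naive scaling would predict.

```python
# Sketch of the 1/W law under two stated assumptions (not the paper's
# calibrated model): a fixed per-GPU KV-cache token budget, and GPU
# power draw that is roughly independent of concurrency.

# Total KV-cache tokens one H100 can hold, implied by the abstract's
# 16 sequences in flight at 64K context (an assumed constant here).
KV_TOKEN_BUDGET = 16 * 64 * 1024

def concurrency(context_len: int) -> int:
    """Sequences in flight: the KV budget divided by per-sequence context."""
    return KV_TOKEN_BUDGET // context_len

if __name__ == "__main__":
    # Each doubling of context halves concurrency, and therefore
    # (at fixed power) halves tokens per watt.
    for ctx in (4096, 8192, 16384, 32768, 65536):
        print(f"{ctx // 1024:>3}K context -> {concurrency(ctx):>4} sequences in flight")
```

Running the loop shows concurrency falling 256 → 128 → 64 → 32 → 16 across the 4K-to-64K range: one halving per context doubling, which is the 1/W law stated in units of sequences rather than watts.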