Large language models (LLMs) have demonstrated remarkable capabilities, but their adoption is limited by high computational costs during inference. While increasing parameter counts enhances accuracy, it also widens the gap between state-of-the-art capabilities and practical deployability. We present Puzzle, a framework that accelerates LLM inference on specific hardware while preserving model capabilities. Through an innovative application of neural architecture search (NAS) at an unprecedented scale, Puzzle systematically optimizes models with tens of billions of parameters under hardware constraints. Our approach uses blockwise local knowledge distillation (BLD) for parallel architecture exploration and employs mixed-integer programming for precise constraint optimization. We demonstrate the real-world impact of our framework through Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B), a publicly available model derived from Llama-3.1-70B-Instruct. Nemotron-51B achieves a 2.17x inference throughput speedup and fits on a single NVIDIA H100 GPU while preserving 98.4% of the original model's capabilities. Nemotron-51B currently stands as the most accurate language model capable of inference on a single GPU with large batch sizes. Remarkably, this transformation required just 45B training tokens, compared to the over 15T tokens used to train the 70B model it was derived from. This establishes a new paradigm in which powerful models can be optimized for efficient deployment with only a negligible compromise of their capabilities, demonstrating that inference performance, not parameter count alone, should guide model selection. With the release of Nemotron-51B and the presentation of the Puzzle framework, we provide practitioners with immediate access to state-of-the-art language modeling capabilities at significantly reduced computational costs.