Modern AI acceleration faces a fundamental challenge: conventional assumptions about memory requirements, batching effectiveness, and latency-throughput tradeoffs are systemwide generalizations that ignore the heterogeneous computational patterns of individual neural network operators. However, moving toward network-level customization and operator-level heterogeneity incurs substantial Non-Recurring Engineering (NRE) costs. While chiplet-based approaches have been proposed to amortize NRE costs, reuse opportunities remain limited without carefully identifying which chiplets are truly necessary. This paper introduces Mozart, a chiplet ecosystem and accelerator co-design framework that systematically constructs low-cost bespoke application-specific integrated circuits (BASICs). BASICs leverage operator-level disaggregation to explore chiplet and memory heterogeneity, tensor fusion, and tensor parallelism, with place-and-route validation ensuring physical implementability. The framework also enables constraint-aware system-level optimization across deployment contexts ranging from datacenter inference serving to edge computing in autonomous vehicles. The evaluation confirms that, with just 8 strategically selected chiplets, Mozart-generated composite BASICs achieve 43.5%, 25.4%, 67.7%, and 78.8% reductions in energy, energy-cost product, energy-delay product (EDP), and energy-delay-cost product, respectively, compared to traditional homogeneous accelerators. For datacenter LLM serving, Mozart achieves 15-19% energy reduction and 35-39% energy-cost improvement. In speculative decoding, Mozart delivers throughput improvements of 24.6-58.6% while reducing energy consumption by 38.6-45.6%. For autonomous vehicle perception, Mozart reduces energy-cost by 25.54% and energy by 10.53% under real-time constraints.