As Large Language Models (LLMs) continue to evolve, the Mixture of Experts (MoE) architecture has emerged as a prevailing design for achieving state-of-the-art performance across a wide range of tasks. MoE models use sparse gating to activate only a handful of expert sub-networks per input, achieving billion-parameter capacity at inference costs comparable to much smaller models. However, such models often pose challenges for hardware deployment due to the massive data volume introduced by the MoE layers. To address these challenges, we propose Stratum, a system-hardware co-design that combines the emerging memory technology Monolithic 3D-Stackable DRAM (Mono3D DRAM), near-memory processing (NMP), and GPU acceleration. The logic and Mono3D DRAM dies are connected through hybrid bonding, while the Mono3D DRAM stack and the GPU are interconnected via a silicon interposer. Mono3D DRAM offers higher internal bandwidth than HBM thanks to the dense vertical interconnect pitch enabled by its monolithic structure, which in turn supports higher-performance near-memory processing. Furthermore, we tackle the latency variation introduced by aggressive vertical scaling of Mono3D DRAM along the z-dimension by constructing internal memory tiers and assigning data across layers according to access likelihood, guided by topic-based expert usage prediction, to boost NMP throughput. Stratum achieves up to 8.29x higher decoding throughput and 7.66x better energy efficiency across various benchmarks compared to GPU baselines.
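The tiered data placement described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the function name, the greedy fastest-tier-first policy, and the per-tier expert capacities are all assumptions for illustration. The idea is simply that experts predicted to be accessed most often are pinned to the lowest-latency Mono3D DRAM tiers:

```python
def assign_experts_to_tiers(access_prob, tier_capacities):
    """Hypothetical greedy placement: experts sorted by descending predicted
    access probability fill memory tiers ordered fastest-first.

    access_prob: predicted access probability per expert (e.g. from a
                 topic-based usage predictor).
    tier_capacities: number of expert slots per tier, index 0 = fastest tier.
    Returns a dict mapping expert index -> tier index.
    """
    # Hottest experts first.
    order = sorted(range(len(access_prob)), key=lambda e: -access_prob[e])
    placement = {}
    tier = 0
    remaining = list(tier_capacities)
    for e in order:
        # Advance to the next tier once the current one is full.
        while tier < len(remaining) and remaining[tier] == 0:
            tier += 1
        if tier == len(remaining):
            raise ValueError("not enough tier capacity for all experts")
        placement[e] = tier
        remaining[tier] -= 1
    return placement

# Five experts, a fast tier holding 2 and a slower tier holding 3.
probs = [0.40, 0.05, 0.25, 0.10, 0.20]
print(assign_experts_to_tiers(probs, [2, 3]))
# -> {0: 0, 2: 0, 4: 1, 3: 1, 1: 1}
```

A real system would re-run such placement as the predicted topic mix shifts, trading migration cost against the NMP throughput gained from serving hot experts out of the fastest layers.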