Emerging computing applications such as Artificial Intelligence (AI) are facing a memory wall: existing on-package memory solutions are unable to meet their power-efficient bandwidth demands. We propose enhancing UCIe with memory semantics to deliver power-efficient, cost-effective on-package memory solutions applicable across the entire computing continuum. We propose approaches that reuse existing LPDDR6 and HBM memory through a logic die that connects to the SoC using UCIe, as well as an approach in which the DRAM die natively supports UCIe instead of the LPDDR6 bus interface. Our approaches deliver significantly higher bandwidth density (up to 10x), lower latency (up to 3x), lower power (up to 3x), and lower cost than existing HBM4 and LPDDR on-package memory solutions.