Recent advances in nanotechnology enable the Monolithic 3D (M3D) integration of multiple memory and logic layers in a single chip, allowing for fine-grained connections between layers and significantly alleviating main memory bottlenecks. We show for a variety of workloads, on a state-of-the-art M3D-based system, that the performance and energy bottlenecks shift from main memory to the processor core and cache hierarchy. Therefore, there is a need to revisit current designs that have been conventionally tailored to tackle the memory bottleneck. Based on the insights from our design space exploration, we propose RevaMp3D, which introduces five key changes. First, we propose removing the shared last-level cache, as this delivers speedups comparable to or exceeding those from increasing its size or reducing its latency across all workloads. Second, since improving L1 cache latency has a large impact on performance, we reduce L1 latency by leveraging an M3D layout to shorten its wires. Third, we repurpose the area from the removed cache to widen and scale up pipeline structures, accommodating more in-flight requests that are efficiently served by M3D memory. To avoid latency penalties from these larger structures, we leverage M3D layouts. Fourth, to facilitate high thread-level parallelism, we propose a new fine-grained synchronization technique that uses M3D's dense inter-layer connectivity. Fifth, we leverage the M3D main memory to mitigate the core bottlenecks. We propose a processor frontend design that memoizes repetitive fetched, decoded, and reordered instructions, stores them in main memory, and turns off the relevant parts of the core when possible. RevaMp3D provides 1.2x-2.9x speedup and 1.2x-1.4x energy reduction compared to a state-of-the-art M3D system. We also analyze RevaMp3D's design decisions across various memory latencies to inform latency-aware designs.