The shift of data-intensive processing from the cloud to the edge has introduced new challenges and expectations for the next generation of intelligent computing systems. As the memory wall continues to grow, modern systems can meet these performance expectations only when applications exhibit data access patterns with ideal memory layouts and ideal spatiotemporal locality in caches. However, only a few data-intensive applications are characterized by such ideal locality. Most instead either (i) exhibit poor locality when naively implemented and must undergo costly redesign and tuning, or (ii) inflate their memory footprint to attain proper locality. To address these challenges, we propose a hardware/software co-designed approach that can be implemented on commercially available SoC/FPGA platforms. Our approach seamlessly inserts a Tensor Memory Engine into the CPUs' data path; the engine provides running applications with data exhibiting ideal cache locality by (i) accessing memory on behalf of the CPUs and (ii) composing a re-organized view of the memory layout. Unlike in- and near-memory computing approaches, our design clearly decouples computation from memory access: computation is still performed on the CPUs, while data re-organization is delegated to the Tensor Memory Engine.
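To make the mechanism concrete, the sketch below is a purely software analogue of the data re-organization described above, under stated assumptions: the engine itself is a hardware block in the CPUs' data path, and the names (`tme_compose_column_view`, `sum_column`) and the column-gather scheme are illustrative, not the actual interface. The sketch contrasts the strided accesses a column walk incurs over a row-major matrix (poor locality) with the contiguous, unit-stride view the engine would hand to the application.

```c
/* Conceptual software analogue of the Tensor Memory Engine's data
 * re-organization. All names and the column-gather scheme are
 * hypothetical stand-ins, not the paper's actual hardware interface. */
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the engine: gather one column of a row-major n-by-n
 * matrix into a contiguous buffer. In the proposed hardware, this
 * strided traffic would be handled off the CPUs' critical path. */
static void tme_compose_column_view(const float *a, size_t n,
                                    size_t col, float *view) {
    for (size_t row = 0; row < n; row++)
        view[row] = a[row * n + col];   /* stride-n reads done on the CPU's behalf */
}

/* The application kernel only ever sees the re-organized view. */
static float sum_column(const float *view, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += view[i];                   /* unit-stride, cache-friendly accesses */
    return s;
}

int main(void) {
    enum { N = 256 };
    float *a = malloc((size_t)N * N * sizeof *a);
    float *view = malloc(N * sizeof *view);
    if (!a || !view) return 1;
    for (size_t i = 0; i < (size_t)N * N; i++)
        a[i] = (float)i;
    tme_compose_column_view(a, N, 7, view); /* engine composes the view */
    printf("sum of column 7 = %f\n", sum_column(view, N));
    free(a);
    free(view);
    return 0;
}
```

In the hardware realization sketched by the abstract, the gather step would be transparent to the application: the CPUs would issue ordinary loads against the re-organized view while the engine performs the underlying strided memory accesses.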