Personalized recommendation is a ubiquitous application on the internet, with many industries and hyperscalers extensively leveraging Deep Learning Recommendation Models (DLRMs) for their personalization needs (such as ad serving or movie suggestions). As growing model and dataset sizes push computation and memory requirements, GPUs are increasingly preferred for executing DLRM inference. However, serving newer DLRMs while meeting acceptable latencies remains challenging, forcing traditional deployments to use ever more GPUs and driving up inference serving costs. In this paper, we show that the embedding stage continues to be the primary bottleneck in the GPU inference pipeline, causing up to a 3.2x slowdown in the embedding stage alone. To understand the problem thoroughly, we conduct a detailed microarchitectural characterization and highlight the low occupancy of the standard embedding kernels. By applying direct compiler optimizations, we achieve optimal occupancy, improving performance by up to 53%. Yet, long memory latency stalls persist. To tackle this challenge, we propose specialized plug-and-play software prefetching and L2 pinning techniques, which help hide and reduce these latencies. Further, we propose combining the two, as they complement each other. Experimental evaluations on A100 GPUs with large models and datasets show that our proposed techniques improve performance by up to 103% for the embedding stage, and by up to 77% for the overall DLRM inference pipeline.
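The L2 pinning idea described above can be sketched with CUDA's L2 access-policy-window API (available since CUDA 11, targeting devices such as the A100). This is a minimal illustration, not the paper's implementation: the function name, the pointer to the hot embedding rows, and the byte count are all hypothetical placeholders.

```cuda
#include <cuda_runtime.h>

// Hypothetical sketch: keep a frequently accessed slice of an embedding
// table resident in L2, so embedding lookups hit cache instead of DRAM.
void pin_hot_embedding_region(cudaStream_t stream, void* hot_rows, size_t hot_bytes) {
    // Reserve a portion of L2 for persisting accesses (the A100 allows
    // setting aside part of its 40 MB L2 for this purpose).
    cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize, hot_bytes);

    // Describe the address window whose accesses should persist in L2.
    cudaStreamAttrValue attr = {};
    attr.accessPolicyWindow.base_ptr  = hot_rows;  // start of the hot region
    attr.accessPolicyWindow.num_bytes = hot_bytes; // bytes to keep resident
    attr.accessPolicyWindow.hitRatio  = 1.0f;      // treat all accesses as persisting
    attr.accessPolicyWindow.hitProp   = cudaAccessPropertyPersisting;
    attr.accessPolicyWindow.missProp  = cudaAccessPropertyStreaming;

    // Attach the policy to the stream that launches the embedding kernels.
    cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &attr);
}
```

Kernels launched on `stream` after this call will have their loads from the window biased to stay in L2; how the hot region is chosen (e.g. by access frequency) is a separate policy decision that the sketch does not cover.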