Limited HBM capacity has become the primary bottleneck for hosting the growing number of ever-larger GPU tasks. While demand paging extends capacity via host DRAM, it incurs slowdowns of up to 78x due to the massive working sets and poor locality of GPU workloads. We observe, however, that GPU memory access patterns are inherently predictable from kernel launch arguments and the asynchronous nature of kernel execution. Leveraging this, we propose MSched, an OS-level scheduler that extends GPU context switching to include proactive working set preparation, thereby coalescing the fragmented, eventually incurred, and expensive page faults into a single efficient migration. MSched employs a template-based approach that predicts working sets with near-perfect accuracy, and co-designs the task scheduler with the memory manager to enforce a globally optimal page placement policy. Evaluation demonstrates that MSched outperforms demand paging by up to 11.05x for scientific and deep learning workloads, and by up to 57.88x for LLM workloads under memory oversubscription.
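To illustrate the intuition behind predicting a kernel's working set from its launch arguments, below is a minimal sketch, not MSched's actual implementation. It assumes each pointer argument references a contiguous device buffer whose extent was recorded at allocation time, and that pages touched by one launch can be coalesced into bulk ranges for a single migration; the names (`BufferArg`, `predict_working_set`) and the 2 MiB page size are illustrative assumptions.

```cpp
// Hypothetical sketch: derive a kernel's working set from its launch
// arguments, assuming each pointer argument maps to a contiguous
// buffer whose extent is known from the allocator.
#include <cstdint>
#include <utility>
#include <vector>

constexpr uint64_t kPageSize = 2ull << 20;  // 2 MiB GPU page (assumption)

struct BufferArg {
    uint64_t base;    // device virtual address passed to the kernel
    uint64_t length;  // allocation extent recorded at alloc time
};

// Round each buffer out to page boundaries so the ranges touched by
// one launch can be migrated in a single bulk transfer, instead of
// being faulted in page by page on demand.
std::vector<std::pair<uint64_t, uint64_t>>
predict_working_set(const std::vector<BufferArg>& args) {
    std::vector<std::pair<uint64_t, uint64_t>> ranges;  // {start, bytes}
    for (const auto& a : args) {
        uint64_t first = a.base / kPageSize;                   // first page
        uint64_t last  = (a.base + a.length - 1) / kPageSize;  // last page
        ranges.emplace_back(first * kPageSize,
                            (last - first + 1) * kPageSize);
    }
    return ranges;
}
```

Because the prediction needs only the launch arguments, it can run at enqueue time, well before the kernel executes asynchronously, which is what allows the migration to be scheduled proactively during a context switch.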