Large-scale computing systems are increasingly using accelerators such as GPUs to enable peta- and exa-scale levels of compute to meet the needs of Machine Learning (ML) and scientific computing applications. Given the widespread and growing use of ML, including in some scientific applications, optimizing these clusters for ML workloads is particularly important. However, recent work has demonstrated that accelerators in these clusters can suffer from performance variability, and that this variability can lead to resource under-utilization and load imbalance. In this work we focus on how cluster schedulers, which are used to share accelerator-rich clusters across many concurrent ML jobs, can embrace performance variability to mitigate its effects. Our key insight is to characterize which applications are more likely to suffer from performance variability and to take that into account when placing jobs on the cluster. We design a novel cluster scheduler, PAL, which uses performance variability measurements and application-specific profiles to improve job performance and resource utilization. PAL also balances performance variability with locality, ensuring jobs are spread across as few nodes as possible. Overall, PAL significantly improves GPU-rich cluster scheduling: across traces for six ML workload applications spanning image, language, and vision models with a variety of variability profiles, PAL improves geomean job completion time by 42%, cluster utilization by 28%, and makespan by 47% over existing state-of-the-art schedulers.
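The core idea of balancing a job's sensitivity to GPU performance variability against locality can be illustrated with a minimal placement sketch. This is not PAL's actual algorithm; the scoring rule, the `place_job` helper, and all weights and data shapes below are hypothetical, assuming per-GPU relative speeds are available from periodic measurements and per-application sensitivity comes from an offline profile.

```python
def place_job(sensitivity, gpus_needed, free_gpus):
    """Greedy variability- and locality-aware GPU selection (illustrative only).

    sensitivity: 0..1, how strongly this application's throughput degrades
                 on slower GPUs (hypothetical offline profile).
    free_gpus:   list of (node_id, gpu_id, relative_speed) tuples, where
                 relative_speed is 1.0 for a nominal-speed GPU.
    Returns the chosen (node_id, gpu_id, relative_speed) tuples.
    """
    chosen, used_nodes = [], set()
    remaining = list(free_gpus)
    for _ in range(gpus_needed):
        # Variability-sensitive jobs weight GPU speed heavily; insensitive
        # jobs mostly reward staying on already-used nodes (locality).
        # Ties are broken by raw speed.
        best = max(
            remaining,
            key=lambda g: (sensitivity * g[2]
                           + (1 - sensitivity) * (g[0] in used_nodes),
                           g[2]),
        )
        remaining.remove(best)
        chosen.append(best)
        used_nodes.add(best[0])
    return chosen
```

For example, with node A offering GPUs at relative speeds 1.0 and 0.7 and node B offering two at 0.9, a highly sensitive two-GPU job takes the two fastest GPUs across both nodes, while an insensitive job packs both GPUs onto node A, accepting the slower 0.7 GPU to keep locality.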