Hosting diverse large language model workloads in a unified resource pool through co-location is cost-effective. For example, long-running chat services generally follow diurnal traffic patterns, which motivates co-locating batch jobs to fill the resource valleys between successive peaks and thereby saturate resource utilization across the cluster. These heterogeneous workloads often carry different business priorities, so preemption can be leveraged for resource elasticity. However, workloads often have distinct topology preferences as well. The resources released by lower-priority instances may fail to meet the requirements of high-priority online services, which are usually latency-sensitive. The root cause of this mismatch is the resource scheduler's lack of topology awareness, especially during preemption. To bridge this gap, we develop a fine-grained topology-aware method for preemptive scheduling of hybrid workloads. The method ensures that the resources freed by preempted tasks adhere to the topological affinity needs of high-priority preemptors in either a guaranteed or a best-effort manner. This dynamic alignment significantly increases the efficiency of preemption and improves overall scheduling performance for LLM workloads by $55\%$.
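The guaranteed versus best-effort victim selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the single-domain topology model (e.g. GPUs grouped by NUMA node or NVLink island) and all names (`Task`, `pick_victims`) are assumptions introduced for the example.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Task:
    name: str
    priority: int
    gpus: list  # (domain, gpu_id) pairs held by this task

def pick_victims(tasks, num_gpus, preemptor_priority, guaranteed=True):
    """Choose lower-priority victims to preempt.

    Guaranteed mode: only return a victim set whose freed GPUs include
    num_gpus within a single topology domain (the preemptor's affinity
    need); otherwise return None.
    Best-effort mode: fall back to freeing enough GPUs regardless of
    topology when no single-domain set exists.
    """
    candidates = [t for t in tasks if t.priority < preemptor_priority]
    # Search victim subsets whose freed GPUs fit one topology domain.
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            freed = [g for t in subset for g in t.gpus]
            by_domain = {}
            for dom, gid in freed:
                by_domain.setdefault(dom, []).append(gid)
            if any(len(gids) >= num_gpus for gids in by_domain.values()):
                return list(subset)
    if guaranteed:
        return None  # no topology-satisfying victim set exists
    # Best-effort: ignore domains and just free enough GPUs.
    chosen, freed_count = [], 0
    for t in sorted(candidates, key=lambda t: t.priority):
        chosen.append(t)
        freed_count += len(t.gpus)
        if freed_count >= num_gpus:
            return chosen
    return None
```

In this toy model, guaranteed mode captures the topology-affinity requirement of latency-sensitive preemptors, while best-effort mode trades affinity for availability when no aligned victim set can be found.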