State-of-the-art cloud-native applications require intelligent schedulers that effectively balance system stability, resource utilisation, and cost. While Kubernetes provides feasibility-based placement by default, recent research has explored reinforcement learning (RL) for more intelligent scheduling decisions. However, current RL-based schedulers have three major limitations. First, most use a monolithic centralised agent, which does not scale to large heterogeneous clusters. Second, those that use multi-objective reward functions assume a simple, static, linear combination of the objectives. Third, no previous work has produced a stress-aware scheduler that adapts to dynamic cluster conditions. To address these gaps, we propose the Adaptive Graph-enhanced Multi-Agent Reinforcement Learning Dynamic Kubernetes Scheduler (AGMARL-DKS), which introduces three major innovations. First, we obtain a scalable solution by casting scheduling as a cooperative multi-agent problem in which every cluster node acts as an agent, trained centrally and executed in a decentralised manner. Second, to keep agents context-aware while remaining decentralised, we use a Graph Neural Network (GNN) to build a state representation of the global cluster context at each agent, improving on methods that rely solely on local observations. Finally, to trade off the competing objectives, we apply a stress-aware lexicographic ordering policy instead of a simple, static linear weighting. Evaluations on Google Kubernetes Engine (GKE) show that AGMARL-DKS significantly outperforms the default scheduler in fault tolerance, utilisation, and cost, especially when scheduling batch and mission-critical workloads.
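To illustrate the stress-aware lexicographic idea in the abstract, the following is a minimal sketch, not the paper's implementation: candidate nodes are filtered objective by objective in a priority order that switches when a (hypothetical) cluster stress signal crosses a threshold, rather than collapsing objectives into one weighted score. All names, thresholds, and the tolerance parameter are illustrative assumptions.

```python
# Hypothetical sketch of stress-aware lexicographic objective ordering.
# Objective names, the stress threshold, and the tolerance are illustrative
# assumptions, not values taken from the AGMARL-DKS paper.

def lexicographic_best(candidates, stress_level, tolerance=0.05):
    """Pick the candidate whose objective scores (higher is better) are best
    under a stress-dependent priority order, with a tolerance at each tier."""
    # Under high stress, stability dominates; otherwise cost leads.
    if stress_level > 0.8:
        order = ["stability", "utilisation", "cost"]
    else:
        order = ["cost", "utilisation", "stability"]

    survivors = list(candidates)
    for objective in order:
        best = max(c[objective] for c in survivors)
        # Keep candidates within `tolerance` of the best on this objective;
        # remaining ties fall through to the next objective in the order.
        survivors = [c for c in survivors if c[objective] >= best - tolerance]
        if len(survivors) == 1:
            break
    return survivors[0]

nodes = [
    {"name": "n1", "stability": 0.9, "utilisation": 0.6, "cost": 0.4},
    {"name": "n2", "stability": 0.7, "utilisation": 0.8, "cost": 0.9},
]
print(lexicographic_best(nodes, stress_level=0.9)["name"])  # high stress -> n1
print(lexicographic_best(nodes, stress_level=0.2)["name"])  # low stress  -> n2
```

Unlike a linear weighting, this ordering guarantees that under high stress no amount of cost savings can override a meaningful stability deficit, which is the adaptive trade-off behaviour the abstract describes.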