This study presents a novel Q-learning-based scheduling algorithm for computer system performance optimization and adaptive workload management. Modern computing environments are characterized by growing data volumes, increasing task complexity, and dynamic workloads, and traditional static scheduling methods such as Round-Robin and Priority Scheduling cannot meet the resulting demands for efficient resource allocation and real-time adaptability. By contrast, Q-learning, a reinforcement learning algorithm, continuously learns from changes in system state, enabling dynamic scheduling and resource optimization. Extensive experiments show that the proposed approach outperforms both traditional scheduling methods and dynamic resource allocation (DRA) algorithms in task completion time and resource utilization. These findings highlight the potential of reinforcement-learning-based scheduling to address the growing complexity and unpredictability of computing environments. This research provides a foundation for integrating AI-driven adaptive scheduling into future large-scale systems, offering a scalable, intelligent solution that enhances system performance, reduces operating costs, and supports sustainable energy consumption. The broad applicability of the approach makes it a promising candidate for next-generation computing frameworks such as edge computing, cloud computing, and the Internet of Things.
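The scheduling loop described above, in which the agent observes system state, assigns a task, and learns from the resulting completion time, can be sketched with tabular Q-learning. This is a minimal illustrative sketch, not the paper's exact formulation: the two-server setup, the load-based state discretization, the load-drain dynamics, and the negative-completion-time reward are all assumptions introduced here for demonstration.

```python
import random

# Hypothetical minimal Q-learning scheduler sketch.
# State: discretized load level of each server (0 = low, 1 = high).
# Action: index of the server the next task is assigned to.
# Reward: negative simulated completion time, so shorter is better.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration
NUM_SERVERS = 2

q_table = {}  # maps (state, action) -> estimated long-run value


def get_q(state, action):
    return q_table.get((state, action), 0.0)


def choose_action(state):
    # epsilon-greedy: mostly exploit the best-known server, sometimes explore
    if random.random() < EPSILON:
        return random.randrange(NUM_SERVERS)
    return max(range(NUM_SERVERS), key=lambda a: get_q(state, a))


def update(state, action, reward, next_state):
    # standard Q-learning update rule
    best_next = max(get_q(next_state, a) for a in range(NUM_SERVERS))
    q_table[(state, action)] = get_q(state, action) + ALPHA * (
        reward + GAMMA * best_next - get_q(state, action)
    )


def simulate(episodes=2000):
    random.seed(0)
    loads = [0.0] * NUM_SERVERS
    for _ in range(episodes):
        state = tuple(int(load > 1.0) for load in loads)  # discretize loads
        action = choose_action(state)
        # assumed model: completion time grows with the chosen server's load
        completion_time = 1.0 + loads[action]
        loads[action] += 0.5                               # task adds load
        loads = [max(0.0, load - 0.3) for load in loads]   # servers drain
        next_state = tuple(int(load > 1.0) for load in loads)
        update(state, action, -completion_time, next_state)
    return q_table


table = simulate()
```

After training, the learned table favors assigning tasks to less-loaded servers, which is the adaptive behavior the abstract contrasts with static Round-Robin or Priority Scheduling.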