Adaptive Mixed-Criticality (AMC) is a fixed-priority preemptive scheduling algorithm for mixed-criticality hard real-time systems. It dominates many other scheduling algorithms for mixed-criticality systems, but at the cost of occasionally dropping jobs of less important/critical tasks when low-priority jobs overrun their time budgets. In this paper we enhance AMC with a deep reinforcement learning (DRL) approach based on a Deep Q-Network. The DRL agent is trained offline and, at run time, adjusts the low-criticality budgets of tasks to avoid budget overruns, while ensuring that no job that stays within its budget misses its deadline. We have implemented and evaluated this approach by simulating realistic workloads from the automotive domain. The results show that the agent is able to reduce budget overruns by at least 50%, even when the budget of each task is chosen by sampling the distribution of its execution time. To the best of our knowledge, this is the first use of DRL in AMC reported in the literature.