In this paper, a reinforcement-learning-based scheduling framework is proposed and implemented to optimize the application-layer quality of service (QoS) of a practical wireless local area network (WLAN) subject to unknown interference. Specifically, application-layer tasks of file delivery and delay-sensitive communication, e.g., screen projection, in a WLAN with the enhanced distributed channel access (EDCA) mechanism are jointly scheduled by adjusting the contention window sizes and the application-layer throughput limit, such that their QoS, i.e., the throughput of file delivery and the round-trip time of the delay-sensitive communication, is optimized. Owing to the unknown interference and the vendor-dependent implementation of the network interface cards, the relationship between the scheduling policy and the resulting QoS is unknown. Hence, a reinforcement learning method is proposed, in which a novel Q-network is trained to map the historical scheduling parameters and QoS observations to the current scheduling action. It is demonstrated on a testbed that the proposed framework achieves significantly better QoS than the conventional EDCA mechanism.
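As a minimal sketch (not the authors' implementation), the described mapping from historical scheduling parameters and QoS observations to a scheduling action could look as follows; the discrete action set of (contention window, throughput cap) pairs, the history length, and the per-step feature layout are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete action space: (contention-window size, throughput cap in Mbps).
ACTIONS = [(cw, cap) for cw in (15, 31, 63) for cap in (50, 100, 200)]
HIST_LEN = 4       # number of past steps fed to the Q-network (assumption)
FEAT_PER_STEP = 4  # per step: chosen CW, chosen cap, observed throughput, observed RTT


class QNetwork:
    """Tiny two-layer MLP: flattened history -> one Q-value per action (sketch only)."""

    def __init__(self, in_dim, hidden, n_actions):
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)

    def forward(self, x):
        h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden layer
        return h @ self.W2 + self.b2                # Q-value for each action


def select_action(qnet, history, eps=0.1):
    """Epsilon-greedy choice over Q-values of the flattened observation history."""
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))
    q = qnet.forward(history.ravel())
    return int(np.argmax(q))


qnet = QNetwork(HIST_LEN * FEAT_PER_STEP, 32, len(ACTIONS))
history = np.zeros((HIST_LEN, FEAT_PER_STEP))  # past (action, QoS) observations
a = select_action(qnet, history, eps=0.0)      # greedy scheduling decision
print(ACTIONS[a])
```

In a deployment, `history` would be filled with measured file-delivery throughput and round-trip times, and the weights would be trained with a standard Q-learning loss against an application-layer QoS reward; both of those details are outside this sketch.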