Despite their promising results, state-of-the-art interactive reinforcement learning schemes rely on passively receiving supervision signals from advisor experts, in the form of either continuous monitoring or pre-defined rules, which inevitably results in a cumbersome and expensive learning process. In this paper, we introduce a novel initiative advisor-in-the-loop actor-critic framework, termed Ask-AC, which replaces the unilateral advisor-guidance mechanism with a bidirectional learner-initiative one, thereby enabling customized and efficacious message exchange between learner and advisor. At the heart of Ask-AC are two complementary components, namely the action requester and the adaptive state selector, which can be readily incorporated into various discrete actor-critic architectures. The former allows the agent to initiatively seek advisor intervention in the presence of uncertain states, while the latter identifies unstable states potentially missed by the former, especially when the environment changes, and then learns to promote the ask action on such states. Experimental results on both stationary and non-stationary environments, and across different actor-critic backbones, demonstrate that the proposed framework significantly improves the learning efficiency of the agent and achieves performance on par with that obtained by continuous advisor monitoring.
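The learner-initiative asking described above can be sketched as an uncertainty-gated action selector: when the policy's action distribution is too uncertain, the agent queries the advisor; otherwise it acts on its own. This is a minimal illustration of the idea only, not the paper's implementation; the entropy-based uncertainty measure, the threshold `tau`, and the `advisor` interface are all our assumptions here.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of a discrete action distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def select_action(policy_probs, advisor, tau=1.0, rng=None):
    """Ask the advisor when the policy is uncertain, else act autonomously.

    policy_probs: action probabilities from the actor for the current state
    advisor:      zero-argument callable returning an expert action
                  (hypothetical interface)
    tau:          uncertainty threshold (hypothetical hyper-parameter)

    Returns (action, asked) where `asked` flags an advisor query.
    """
    rng = rng or np.random.default_rng()
    if entropy(policy_probs) > tau:
        return advisor(), True       # initiative "ask" action
    return int(rng.choice(len(policy_probs), p=policy_probs)), False
```

For example, a uniform distribution over four actions has entropy ln 4 ≈ 1.39 and would trigger a query at `tau=1.0`, whereas a near-deterministic distribution would not.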