Datasets in offline reinforcement learning (RL) often exhibit complex, multi-modal distributions, necessitating policies more expressive than the widely used Gaussian policies to capture them. To handle such complex, multi-modal datasets, in this paper we propose Flow Actor-Critic, a new actor-critic method for offline RL built on recent flow policies. The proposed method not only uses the flow model as the actor, as in previous flow policies, but also exploits the expressive flow model to learn a conservative critic that prevents Q-value explosion in out-of-data regions. To this end, we propose a new form of critic regularizer based on a flow behavior proxy model obtained as a byproduct of the flow-based actor design. By leveraging the flow model in this joint manner, we achieve new state-of-the-art performance on offline RL test datasets, including the D4RL and recent OGBench benchmarks.
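To make the joint use of the flow model concrete, here is a minimal PyTorch-style sketch of a critic update of this kind. This is not the paper's implementation: the callables `flow_policy` (the flow-based actor), `behavior_flow` (the flow behavior proxy), and the CQL-style form of the regularizer are all illustrative assumptions; the paper's actual regularizer may differ.

```python
import torch

def critic_loss(q_net, target_q, batch, flow_policy, behavior_flow,
                gamma=0.99, alpha=1.0):
    """Sketch of a conservative critic loss using a flow behavior proxy.

    `flow_policy(s)` and `behavior_flow(s)` are assumed to sample actions
    from the flow-based actor and the flow behavior proxy, respectively.
    """
    s, a, r, s_next, done = batch  # tensors from the offline dataset

    # Standard TD target, with next actions drawn from the flow-based actor.
    with torch.no_grad():
        a_next = flow_policy(s_next)
        td_target = r + gamma * (1.0 - done) * target_q(s_next, a_next)
    td_loss = ((q_net(s, a) - td_target) ** 2).mean()

    # Hypothetical conservative regularizer: push down Q-values on
    # actor-proposed actions relative to actions sampled from the flow
    # behavior proxy, discouraging Q-value explosion off the data support.
    a_pi = flow_policy(s)
    a_beta = behavior_flow(s)
    reg = (q_net(s, a_pi) - q_net(s, a_beta)).mean()

    return td_loss + alpha * reg
```

Under these assumptions, the same flow machinery serves two roles: it generates actions for the actor and target computation, and it anchors the critic penalty to the behavior distribution via the proxy samples.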