We study infinite-horizon average-reward reinforcement learning with linear Markov decision processes (MDPs). Algorithm design for this setting is challenging because the associated Bellman operator is not a contraction. Previous approaches are either computationally inefficient or require strong assumptions on the dynamics, such as ergodicity, to achieve a regret bound of $\widetilde{O}(\sqrt{T})$. In this paper, we propose the first algorithm that achieves $\widetilde{O}(\sqrt{T})$ regret with computational complexity polynomial in the problem parameters, without strong assumptions on the dynamics. Our approach approximates the average-reward setting by a discounted MDP with a carefully chosen discount factor and then applies optimistic value iteration. Specifically, our algorithmic structure plans a nonstationary policy through optimistic value iteration and follows that policy until a specified information metric of the collected data doubles. Additionally, we introduce a value-function clipping procedure that limits the span of the value function for sample efficiency.
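To make the algorithmic structure concrete, the following is a minimal illustrative sketch in Python of the lazy-replanning loop the abstract describes, not the paper's actual algorithm: the agent maintains a ridge-regularized Gram matrix of observed features, replans only when its determinant (the "information metric") doubles, and clips value estimates to bound their span. All concrete names and values here (`span_bound`, the stub planner, the random stand-in transitions) are assumptions for illustration.

```python
# Sketch of the lazy-replanning structure for a linear MDP with feature
# map phi(s, a) in R^d.  The planner below is a stub; in the actual
# algorithm it would run optimistic value iteration on the discounted MDP.
import numpy as np

rng = np.random.default_rng(0)
d, n_states, n_actions = 4, 5, 3
gamma = 0.99           # discount factor approximating the average-reward objective
span_bound = 10.0      # cap on the span of the value estimate (assumed value)
phi = rng.random((n_states, n_actions, d))  # stand-in feature map

def clip_values(V):
    # Limit the span max(V) - min(V); the abstract credits this
    # clipping step for the algorithm's sample efficiency.
    return np.minimum(V, V.min() + span_bound)

def optimistic_value_iteration(Lambda):
    # Stub planner: a standard optimism bonus for linear MDPs is
    # proportional to the elliptical norm ||phi(s,a)||_{Lambda^{-1}}.
    bonus = np.sqrt(np.einsum("sad,de,sae->sa", phi, np.linalg.inv(Lambda), phi))
    Q = rng.random((n_states, n_actions)) + bonus   # placeholder value estimates
    return clip_values(Q.max(axis=1)), Q

Lambda = np.eye(d)                       # regularized Gram matrix
det_ref = np.linalg.det(Lambda)
V, Q = optimistic_value_iteration(Lambda)
s = 0
for t in range(1000):
    a = int(np.argmax(Q[s]))             # follow the planned policy
    Lambda += np.outer(phi[s, a], phi[s, a])
    s = int(rng.integers(n_states))      # stand-in transition
    if np.linalg.det(Lambda) > 2.0 * det_ref:       # information doubled
        det_ref = np.linalg.det(Lambda)
        V, Q = optimistic_value_iteration(Lambda)   # replan lazily
```

The doubling trigger is what keeps the number of planning calls, and hence the total computation, logarithmic in $T$ under this sketch's assumptions, while following each planned policy for an extended stretch between replans.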