The idea of decision-aware model learning, that models should be accurate where it matters for decision-making, has gained prominence in model-based reinforcement learning. While promising theoretical results have been established, the empirical performance of algorithms leveraging a decision-aware loss has been lacking, especially in continuous control problems. In this paper, we present a study of the components necessary for decision-aware reinforcement learning models and showcase design choices that enable well-performing algorithms. To this end, we provide a theoretical and empirical investigation into algorithmic ideas in the field. We highlight that empirical design decisions established in the MuZero line of work, most importantly the use of a latent model, are vital to achieving good performance for related algorithms. Furthermore, we show that the MuZero loss function is biased in stochastic environments and establish that this bias has practical consequences. Building on these findings, we present an overview of which decision-aware loss functions are best suited to which empirical scenarios, providing actionable insights to practitioners in the field.