Offline meta-reinforcement learning aims to equip agents with the ability to rapidly adapt to new tasks by training on data from a set of different tasks. Context-based approaches utilize a history of state-action-reward transitions -- referred to as the context -- to infer representations of the current task, and then condition the agent, i.e., the policy and value function, on the task representations. Intuitively, the better the task representations capture the underlying tasks, the better the agent can generalize to new tasks. Unfortunately, context-based approaches suffer from distribution mismatch: the context in the offline data does not match the context at test time, which limits their ability to generalize to the test tasks and causes the task representations to overfit to the offline training data. Intuitively, the task representations should be independent of the behavior policy used to collect the offline data. To address this issue, we approximately minimize the mutual information between the distribution over the task representations and the behavior policy by maximizing the entropy of the behavior policy conditioned on the task representations. We validate our approach in MuJoCo environments, showing that, compared to baselines, our task representations more faithfully capture the underlying tasks, and that our method outperforms prior methods on both in-distribution and out-of-distribution tasks.
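The regularizer described above can be illustrated with a minimal numpy sketch, assuming a discrete action space: a predictor q(a | s, z) estimates the behavior policy's action distribution given the state and the task representation z, and its average entropy is added as a bonus to the training objective. The function names, shapes, and the coefficient alpha below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def conditional_entropy_bonus(logits):
    """Mean entropy of the predicted behavior-policy action distribution
    q(a | s, z) over a batch. Since I(z; pi_b) = H(pi_b) - H(pi_b | z),
    maximizing this conditional entropy (approximately) minimizes the
    mutual information between the task representation and the behavior
    policy."""
    p = softmax(logits)
    ent = -(p * np.log(p + 1e-8)).sum(axis=-1)
    return ent.mean()

# hypothetical batch: 4 states, 3 discrete actions
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))  # output of a predictor q(a | s, z)
bonus = conditional_entropy_bonus(logits)
# a sketch of the combined objective (alpha > 0 trades off the bonus):
# total_loss = task_inference_loss - alpha * bonus
```

The bonus is bounded by log(num_actions), so alpha can be tuned on a consistent scale across environments.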