We study offline multitask reinforcement learning in settings where multiple tasks share a low-rank representation of their action-value functions. In this regime, a learner is provided with fixed datasets collected from several related tasks, without access to further online interaction, and seeks to exploit shared structure to improve statistical efficiency and generalization. We analyze a multitask variant of fitted Q-iteration that jointly learns a shared representation and task-specific value functions via Bellman error minimization on offline data. Under standard realizability and coverage assumptions commonly used in offline reinforcement learning, we establish finite-sample generalization guarantees for the learned value functions. Our analysis explicitly characterizes how pooling data across tasks improves estimation accuracy, yielding a $1/\sqrt{nT}$ dependence on the total number of samples across tasks, where $n$ is the number of samples per task and $T$ the number of tasks, while retaining the usual dependence on the horizon and concentrability coefficients arising from distribution shift. In addition, we consider a downstream offline setting in which a new task shares the same underlying representation as the upstream tasks. We study how reusing the representation learned during the multitask phase affects value estimation for this new task, and show that it can reduce the effective complexity of downstream learning relative to learning from scratch. Together, our results clarify the role of shared representations in multitask offline Q-learning and provide theoretical insight into when and how multitask structure can improve generalization in model-free, value-based reinforcement learning.
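To make the setup concrete, the following is a minimal sketch of multitask fitted Q-iteration with a shared low-rank representation, under illustrative assumptions not taken from the paper: each task's action-value function is modeled as $Q_t(s,a) \approx \phi(s,a)^\top B\, w_t$ with a shared matrix $B \in \mathbb{R}^{d \times k}$ and task-specific heads $w_t \in \mathbb{R}^k$, the features $\phi$, the synthetic offline data, the ridge regularization, and the alternating least-squares update are all hypothetical choices standing in for whatever estimator the analysis actually assumes.

```python
# Sketch of multitask fitted Q-iteration with a shared low-rank representation.
# Assumption (not from the paper): Q_t(s, a) ~= phi(s, a)^T B w_t, with B shared
# across tasks and w_t task-specific. Data is synthetic and the alternating
# least-squares update is an illustrative choice, not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes (all hypothetical).
d, k, T, n = 20, 3, 5, 200          # feature dim, rank, number of tasks, samples per task
n_actions, gamma, n_iters = 4, 0.9, 10

# Offline datasets: for each task, features of (s, a), rewards, and features of
# (s', a') for every candidate next action a'.
phi = rng.normal(size=(T, n, d))                    # phi(s_i, a_i)
rewards = rng.normal(size=(T, n))
phi_next = rng.normal(size=(T, n, n_actions, d))    # phi(s'_i, a') for all a'

# Parameters: shared representation B and task-specific heads w_t.
B = rng.normal(size=(d, k)) / np.sqrt(d)
W = rng.normal(size=(T, k)) / np.sqrt(k)

for it in range(n_iters):
    # 1) Regression targets from the current (frozen) value estimates.
    targets = np.empty((T, n))
    for t in range(T):
        q_next = phi_next[t] @ (B @ W[t])            # (n, n_actions)
        targets[t] = rewards[t] + gamma * q_next.max(axis=1)

    # 2) Minimize the pooled squared Bellman error by alternating least squares.
    # 2a) Fix B; fit each task's head w_t by ridge regression on shared features.
    for t in range(T):
        X = phi[t] @ B                               # (n, k)
        W[t] = np.linalg.solve(X.T @ X + 1e-3 * np.eye(k), X.T @ targets[t])

    # 2b) Fix the heads; fit the shared B on data pooled across all T tasks.
    # Each sample contributes the feature vector phi(s_i, a_i) (x) w_t for vec(B).
    A = np.zeros((d * k, d * k))
    b_vec = np.zeros(d * k)
    for t in range(T):
        Z = np.einsum('ij,l->ijl', phi[t], W[t]).reshape(n, d * k)
        A += Z.T @ Z
        b_vec += Z.T @ targets[t]
    B = np.linalg.solve(A + 1e-3 * np.eye(d * k), b_vec).reshape(d, k)

    pooled_err = np.mean([(phi[t] @ (B @ W[t]) - targets[t]) ** 2 for t in range(T)])
    print(f"iter {it}: pooled squared Bellman error = {pooled_err:.4f}")
```

In this sketch, step 2b is where data pooling enters: the shared matrix $B$ is estimated from all $nT$ samples, which is the mechanism behind the $1/\sqrt{nT}$ rate discussed in the abstract, while each head $w_t$ sees only its own task's $n$ samples. For a downstream task sharing the same representation, one would freeze $B$ and run only the task-specific regression in step 2a on the new offline dataset.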