Offline reinforcement learning (RL) aims to find a near-optimal policy using pre-collected datasets. In real-world scenarios, data collection can be costly and risky; offline RL therefore becomes particularly challenging when in-domain data is limited. Given recent advances in Large Language Models (LLMs) and their few-shot learning prowess, this paper introduces $\textbf{La}$nguage Models for $\textbf{Mo}$tion Control ($\textbf{LaMo}$), a general framework based on Decision Transformers that effectively uses pre-trained Language Models (LMs) for offline RL. Our framework highlights four crucial components: (1) initializing Decision Transformers with sequentially pre-trained LMs, (2) employing LoRA fine-tuning, in contrast to full-weight fine-tuning, to combine the pre-trained knowledge from LMs with in-domain knowledge effectively, (3) using non-linear MLP transformations instead of linear projections to generate embeddings, and (4) integrating an auxiliary language prediction loss during fine-tuning to stabilize the LMs and retain their original language abilities. Empirical results indicate that $\textbf{LaMo}$ achieves excellent performance in sparse-reward tasks and closes the gap between value-based offline RL methods and Decision Transformers in dense-reward tasks. In particular, our method demonstrates superior performance in scenarios with limited data samples.
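Components (2)–(4) can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation: the class and function names (`LoRALinear`, `MLPEmbed`, `lamo_loss`), the rank/scaling hyperparameters, and the loss weighting `lam` are illustrative assumptions chosen to show the mechanics.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """Component (2): a frozen pre-trained linear layer plus a trainable
    low-rank update, in the spirit of LoRA fine-tuning."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # Low-rank factors: A maps down to `rank`, B maps back up.
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))  # zero init
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # At initialization B is zero, so the output equals the frozen base.
        return self.base(x) + (x @ self.A @ self.B) * self.scale


class MLPEmbed(nn.Module):
    """Component (3): a non-linear MLP embedding for states, actions, or
    returns, replacing a single linear projection."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def lamo_loss(pred_actions, target_actions, lm_logits, lm_targets, lam=0.1):
    """Component (4): action-prediction loss plus a weighted auxiliary
    language-modeling loss (`lam` is a hypothetical weight)."""
    action_loss = F.mse_loss(pred_actions, target_actions)
    lang_loss = F.cross_entropy(
        lm_logits.flatten(0, 1),  # (batch * seq, vocab)
        lm_targets.flatten(),     # (batch * seq,)
    )
    return action_loss + lam * lang_loss
```

Because `B` is zero-initialized, wrapping a pre-trained layer in `LoRALinear` leaves its behavior unchanged at the start of fine-tuning; only the small `A`/`B` factors receive gradients, which is how the framework combines pre-trained and in-domain knowledge without overwriting the former.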