Models based on the Transformer architecture have seen widespread application across fields such as natural language processing, computer vision, and robotics, with large language models like ChatGPT revolutionizing machine understanding of human language and demonstrating impressive capabilities in memorizing and reproducing information. Traditional machine learning algorithms, however, struggle with catastrophic forgetting, which undermines the diverse and generalized abilities required for robotic deployment. This paper investigates the Receptance Weighted Key Value (RWKV) framework, known for its efficient and effective sequence modeling, and its integration with the decision transformer and experience replay architectures, focusing on potential performance gains in sequence decision-making and lifelong robotic learning tasks. We introduce the Decision-RWKV (DRWKV) model and conduct extensive experiments on the D4RL benchmark within the OpenAI Gym environment and on the D'Claw platform, assessing DRWKV in both single-task tests and lifelong-learning scenarios and showcasing its ability to handle multiple subtasks efficiently. The code for all algorithms, training, and image rendering in this study is open-sourced at https://github.com/ancorasir/DecisionRWKV.
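To illustrate the kind of mechanism the abstract refers to, the following is a minimal sketch (not the paper's DRWKV implementation) of an RWKV-style time-mixing recurrence applied to a decision-transformer-like token sequence. The function name `rwkv_time_mix` and the scalar decay `w` and current-token bonus `u` are simplifications assumed here for illustration; actual RWKV uses learned per-channel decays and additional receptance gating.

```python
import numpy as np

def rwkv_time_mix(keys, values, w=0.5, u=0.3):
    """Linear-time RWKV-style WKV recurrence over a token sequence.

    keys, values: (T, D) arrays of key/value embeddings.
    w: per-step exponential decay of the recurrent state (scalar here;
       per-channel and learned in actual RWKV).
    u: extra log-weight ("bonus") given to the current token.
    Returns a (T, D) array of mixed outputs.
    """
    T, D = keys.shape
    num = np.zeros(D)          # running weighted sum of past values
    den = np.zeros(D)          # running sum of past weights (normalizer)
    out = np.zeros((T, D))
    decay = np.exp(-w)
    for t in range(T):
        k = np.exp(keys[t])
        # blend the decayed history with the bonus-weighted current token
        out[t] = (num + np.exp(u) * k * values[t]) / (den + np.exp(u) * k)
        # fold the current token into the recurrent state
        num = decay * num + k * values[t]
        den = decay * den + k
    return out

# Decision-transformer-style input: interleave return-to-go, state, and
# action embeddings along the time axis before mixing (toy data below).
rng = np.random.default_rng(0)
rtg, obs, act = (rng.standard_normal((3, 4)) for _ in range(3))
tokens = np.stack([rtg, obs, act], axis=1).reshape(-1, 4)  # (9, 4)
mixed = rwkv_time_mix(tokens, tokens)
```

Because the state `(num, den)` is a fixed-size summary of the history, inference cost is constant per token, which is the property that makes RWKV attractive for long trajectories compared with quadratic-cost attention.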