Autoregressive models have demonstrated remarkable success in natural language processing. In this work, we design a simple yet effective autoregressive architecture for robotic manipulation tasks. We propose the Chunking Causal Transformer (CCT), which extends the next-token prediction of causal transformers to multi-token prediction in a single forward pass. Further, we design a novel attention interleaving strategy that allows CCT to be trained efficiently with teacher forcing. Based on CCT, we propose the Autoregressive Policy (ARP) model, which learns to generate action sequences autoregressively. We find that action sequence learning enables better leverage of the underlying causal relationships in robotic tasks. We evaluate ARP across diverse robotic manipulation environments, including Push-T, ALOHA, and RLBench, and show that it outperforms state-of-the-art methods in all tested environments while using less computation and fewer parameters. Video demonstrations, our source code, and the trained ARP models can be found at http://github.com/mlzxy/arp.
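The core idea of chunked multi-token prediction can be illustrated with a toy sketch: instead of emitting one token per forward pass, the generator asks the model for a whole chunk of next tokens at each step. This is a minimal illustration only, not the paper's CCT architecture; the `toy_model` stand-in and all names here are hypothetical.

```python
def generate_chunked(model, prompt, chunk_size, num_chunks):
    """Autoregressive generation that emits `chunk_size` tokens per
    forward pass instead of one, in the spirit of chunked
    multi-token prediction (toy sketch)."""
    seq = list(prompt)
    for _ in range(num_chunks):
        # One forward pass yields a whole chunk of next tokens,
        # conditioned on everything generated so far.
        chunk = model(seq, chunk_size)
        seq.extend(chunk)
    return seq

# Hypothetical stand-in for a trained model: predicts each next
# token as (previous token + 1) mod 10, for the whole chunk.
def toy_model(context, k):
    out, last = [], context[-1]
    for _ in range(k):
        last = (last + 1) % 10
        out.append(last)
    return out

print(generate_chunked(toy_model, [0], chunk_size=3, num_chunks=2))
# Two passes of three tokens each: [0, 1, 2, 3, 4, 5, 6]
```

With a real policy network, each chunk would be a short action sequence, so a trajectory of N actions needs only N / chunk_size forward passes rather than N.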