Generative modeling of complex behaviors from labeled datasets has been a longstanding problem in decision making. Unlike language or image generation, decision making requires modeling actions: continuous-valued vectors that are multimodal in distribution, potentially drawn from uncurated sources, and whose generation errors can compound over sequential prediction. A recent class of models, Behavior Transformers (BeT), addresses this by discretizing actions with k-means clustering to capture different modes. However, k-means struggles to scale to high-dimensional action spaces or long sequences and provides no gradient information, so BeT suffers when modeling long-range actions. In this work, we present the Vector-Quantized Behavior Transformer (VQ-BeT), a versatile model for behavior generation that handles multimodal action prediction, conditional generation, and partial observations. VQ-BeT augments BeT by tokenizing continuous actions with a hierarchical vector quantization module. Across seven environments, including simulated manipulation, autonomous driving, and robotics, VQ-BeT improves on state-of-the-art models such as BeT and Diffusion Policies. Importantly, we demonstrate VQ-BeT's improved ability to capture behavior modes while accelerating inference 5x over Diffusion Policies. Videos and code can be found at https://sjlee.cc/vq-bet
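To make the tokenization idea concrete, the following is a minimal sketch of hierarchical (residual) vector quantization: each stage quantizes the residual left by the previous stage, turning one continuous action into a short sequence of discrete tokens. This is an illustrative toy with random, fixed codebooks, not the paper's learned VQ-VAE module; the function names and codebook sizes here are assumptions for the example.

```python
import numpy as np

def nearest_code(x, codebook):
    # Index of the closest codebook vector (squared L2 distance) per row of x.
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def residual_vq_encode(actions, codebooks):
    """Hierarchical (residual) VQ: stage k quantizes the residual left by
    stages 1..k-1, yielding one discrete token per stage per action."""
    residual = actions.copy()
    tokens = []
    recon = np.zeros_like(actions)
    for cb in codebooks:
        idx = nearest_code(residual, cb)
        quantized = cb[idx]
        tokens.append(idx)
        recon += quantized
        residual -= quantized
    return np.stack(tokens, axis=1), recon

rng = np.random.default_rng(0)
actions = rng.normal(size=(8, 4))  # 8 continuous 4-D actions (toy data)
# Coarse codebook, then a finer one for residuals. Including a zero code in
# the fine codebook guarantees the second stage never increases the error.
codebooks = [
    rng.normal(size=(16, 4)),
    np.vstack([np.zeros((1, 4)), rng.normal(size=(15, 4)) * 0.1]),
]
tokens, recon = residual_vq_encode(actions, codebooks)
print(tokens.shape)  # (8, 2): two discrete tokens per continuous action
```

In VQ-BeT, codebooks like these are learned jointly with a decoder, and the resulting discrete tokens become the prediction targets for the transformer, which is what lets a categorical head capture multimodal continuous actions.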