Real-time character control is an essential component of interactive experiences, with a broad range of applications, including physics simulations, video games, and virtual reality. The success of diffusion models for image synthesis has led to their use for motion synthesis. However, most motion diffusion models are designed for offline applications, where space-time models synthesize an entire sequence of frames of a pre-specified length simultaneously. To enable real-time motion synthesis with a diffusion model that allows time-varying controls, we propose A-MDM (Auto-regressive Motion Diffusion Model). Our conditional diffusion model takes an initial pose as input and auto-regressively generates successive motion frames, each conditioned on the previous frame. Despite its streamlined network architecture, which uses simple MLPs, our framework is capable of generating diverse, long-horizon, and high-fidelity motion sequences. Furthermore, we introduce a suite of techniques for incorporating interactive controls into A-MDM, such as task-oriented sampling, in-painting, and hierarchical reinforcement learning. These techniques enable a pre-trained A-MDM to be efficiently adapted to a variety of new downstream tasks. We conduct a comprehensive suite of experiments to demonstrate the effectiveness of A-MDM and compare its performance against state-of-the-art auto-regressive methods.
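To make the auto-regressive sampling scheme concrete, the sketch below shows a toy DDPM-style rollout in which each new frame is produced by a reverse diffusion chain conditioned on the previous frame's pose. All names, dimensions, and the untrained stand-in denoiser are illustrative assumptions, not the paper's actual architecture or API.

```python
import numpy as np

POSE_DIM = 8      # toy pose vector size (assumption, not the paper's representation)
T = 10            # diffusion steps per frame (illustrative)
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, prev_pose):
    """Stand-in for the MLP denoiser: predicts the noise in x_t,
    conditioned on the diffusion step and the previous frame's pose.
    A real model would be learned; zeros keep the loop well-defined."""
    return np.zeros_like(x_t)

def sample_next_frame(prev_pose, rng):
    """Run the reverse diffusion chain to generate one motion frame."""
    x = rng.standard_normal(POSE_DIM)
    for t in reversed(range(T)):
        eps = denoiser(x, t, prev_pose)
        # Standard DDPM posterior mean for the reverse step
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(POSE_DIM) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

def rollout(init_pose, n_frames, seed=0):
    """Auto-regressive rollout: each frame conditions on the previous one."""
    rng = np.random.default_rng(seed)
    frames = [init_pose]
    for _ in range(n_frames):
        frames.append(sample_next_frame(frames[-1], rng))
    return np.stack(frames)

motion = rollout(np.zeros(POSE_DIM), n_frames=5)
print(motion.shape)  # (6, 8): initial pose plus 5 generated frames
```

Because frames are generated one at a time rather than as a fixed-length space-time block, controls (e.g. task-oriented sampling or in-painting constraints) can be injected at every step of the rollout, which is what enables the time-varying interactive control described above.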