Existing DiT-based audio-driven avatar generation methods have achieved considerable progress, yet their broader application is constrained by high computational overhead and the inability to synthesize long-duration videos. Autoregressive approaches mitigate these limitations through block-wise autoregressive diffusion, but they suffer from error accumulation and quality degradation. To address this, we propose JoyAvatar-Flash, an audio-driven autoregressive model capable of real-time inference and infinite-length video generation, with the following contributions: (1) Progressive Step Bootstrapping (PSB), which allocates more denoising steps to initial frames to stabilize generation and reduce error accumulation; (2) Motion Condition Injection (MCI), which enhances temporal coherence by injecting noise-corrupted previous frames as a motion condition; and (3) Unbounded RoPE via Cache-Resetting (URCR), which enables infinite-length generation through dynamic positional encoding. Our 1.3B-parameter causal model runs at 16 FPS on a single GPU and achieves competitive results in visual quality, temporal consistency, and lip synchronization.
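To make the PSB idea concrete, the following is a minimal hypothetical sketch (not the paper's actual schedule) of how per-block denoising step counts could be allocated so that early blocks receive more steps than later ones; the function name, step counts, and linear decay are all illustrative assumptions.

```python
def psb_step_schedule(num_blocks: int, max_steps: int = 8, min_steps: int = 2) -> list[int]:
    """Illustrative PSB-style schedule: linearly decay the per-block
    denoising step count from max_steps (first block) to min_steps
    (last block). The concrete values are assumptions, not the paper's."""
    schedule = []
    for i in range(num_blocks):
        frac = i / max(num_blocks - 1, 1)  # 0.0 for first block, 1.0 for last
        steps = round(max_steps - frac * (max_steps - min_steps))
        schedule.append(max(steps, min_steps))
    return schedule
```

A caller would look up `schedule[block_index]` when denoising each autoregressive block, spending extra compute where stability matters most.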