Standard autoregressive Video LLMs inevitably suffer from causal-masking bias, which hinders global spatiotemporal modeling and leads to suboptimal video understanding. We propose VidLaDA, a Video LLM built on a diffusion language model that uses bidirectional attention to capture full spatiotemporal dependencies. To further tackle the inference bottleneck of diffusion decoding over massive numbers of video tokens, we introduce MARS-Cache, a framework that accelerates inference by combining asynchronous visual-cache refreshing with frame-wise chunk attention, effectively pruning redundancy while preserving global connectivity via anchor tokens. Extensive experiments show that VidLaDA outperforms diffusion baselines and rivals state-of-the-art autoregressive models (e.g., Qwen2.5-VL and LLaVA-Video), with MARS-Cache delivering over a 12x speedup without compromising reasoning accuracy. Code and checkpoints are open-sourced at https://github.com/ziHoHe/VidLaDA.
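To make the frame-wise chunk attention idea concrete, the following is a minimal sketch of how such an attention mask might be constructed. The token layout (a block of global anchor tokens followed by per-frame token chunks) and the function name are illustrative assumptions, not the paper's actual implementation: frame tokens attend only within their own frame chunk, while anchor tokens attend to, and are attended by, every token.

```python
import numpy as np

def chunk_attention_mask(num_frames: int, tokens_per_frame: int,
                         num_anchors: int) -> np.ndarray:
    """Boolean attention mask (True = attention allowed).

    Assumed layout for illustration: `num_anchors` global anchor tokens,
    followed by `num_frames * tokens_per_frame` frame tokens.
    """
    n = num_anchors + num_frames * tokens_per_frame
    mask = np.zeros((n, n), dtype=bool)
    # Anchor tokens are fully connected, preserving global connectivity.
    mask[:num_anchors, :] = True
    mask[:, :num_anchors] = True
    # Frame tokens: block-diagonal attention, one block per frame,
    # pruning redundant cross-frame interactions.
    for f in range(num_frames):
        start = num_anchors + f * tokens_per_frame
        end = start + tokens_per_frame
        mask[start:end, start:end] = True
    return mask
```

Under this sketch, attention cost for frame tokens scales with the chunk size rather than the full video length, while the small set of anchors keeps every frame reachable in one hop.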