Long-context capability is critical for multi-modal foundation models, especially for long video understanding. We introduce LongVILA, a full-stack solution for long-context visual-language models that co-designs the algorithm and system. For model training, we upgrade existing VLMs to support long video understanding by incorporating two additional stages, i.e., long context extension and long video supervised fine-tuning. However, training on long videos is compute- and memory-intensive. We introduce the long-context Multi-Modal Sequence Parallelism (MM-SP) system, which efficiently parallelizes long video training and inference, enabling 2M-context-length training on 256 GPUs without any gradient checkpointing. LongVILA efficiently extends the number of video frames of VILA from 8 to 2048, achieving 99.8% accuracy on a 6,000-frame (more than 1 million tokens) video needle-in-a-haystack task. LongVILA-7B demonstrates strong accuracy on 9 popular video benchmarks, e.g., 65.1% on VideoMME with subtitles. Besides, MM-SP is 2.1x - 5.7x faster than ring-style sequence parallelism and 1.1x - 1.4x faster than Megatron with hybrid context and tensor parallelism. Moreover, it seamlessly integrates with Hugging Face Transformers.
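The core idea behind sequence parallelism is to shard one very long token sequence into contiguous chunks, one per GPU, so that each device holds only a fraction of the activations. The following is a minimal illustrative sketch of this sharding step, not the actual MM-SP implementation; the function name `shard_sequence` and the use of plain Python lists in place of device tensors are assumptions for clarity.

```python
# Illustrative sketch of sequence-parallel sharding (hypothetical helper, not
# the MM-SP implementation): a long token sequence is split into contiguous,
# near-equal chunks, one per GPU rank, so each rank stores and processes only
# seq_len / world_size tokens.

def shard_sequence(tokens, world_size):
    """Split `tokens` into `world_size` contiguous, near-equal chunks."""
    base, rem = divmod(len(tokens), world_size)
    shards, start = [], 0
    for rank in range(world_size):
        # The first `rem` ranks each take one extra token to cover the remainder.
        size = base + (1 if rank < rem else 0)
        shards.append(tokens[start:start + size])
        start += size
    return shards

# Example: a 2M-token context sharded across 256 GPUs leaves roughly
# 8K tokens of activations per GPU.
shards = shard_sequence(list(range(2_000_000)), 256)
```

In a real system, attention over the full sequence then requires communication between ranks (e.g., ring-style exchange of key/value chunks), which is where scheme-level differences such as MM-SP vs. ring-style sequence parallelism show up in throughput.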