VILA-U is a unified foundation model that integrates video, image, and language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and for generating visual content, which can lead to misalignment between the two tasks and increased model complexity. In contrast, VILA-U handles both tasks with a single autoregressive next-token prediction framework, eliminating the need for additional components such as diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in visual language understanding and generation. The success of VILA-U is attributed to two main factors: first, a unified vision tower that aligns discrete visual tokens with textual inputs during pretraining, which enhances visual perception; second, autoregressive image generation can match the quality of diffusion models when trained on high-quality datasets. Together, these allow VILA-U to perform comparably to more complex models using a fully token-based autoregressive framework.
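The core idea above — one next-token prediction loop serving both understanding and generation — can be sketched in miniature. This is a hedged illustration, not VILA-U's actual implementation: the vocabulary split, token id ranges, and the toy stand-in model are all assumptions made for demonstration. The point is that once images are encoded as discrete visual tokens sharing a vocabulary with text tokens, emitting text (understanding) and emitting visual codes (generation) are the same operation.

```python
# Illustrative sketch (not the real VILA-U code): a single autoregressive
# loop over a unified vocabulary of text tokens and discrete visual tokens.
# Vocabulary sizes and id ranges below are hypothetical.

TEXT_VOCAB = range(0, 1000)        # assumed: ids 0..999 are text tokens
VISUAL_VOCAB = range(1000, 17384)  # assumed: ids 1000+ are discrete visual codes

VOCAB_SIZE = 17384

def next_token(model, sequence):
    """One prediction step: the same model scores the whole unified vocabulary,
    so text tokens and visual tokens compete in a single softmax/argmax."""
    logits = model(sequence)
    return max(range(len(logits)), key=logits.__getitem__)

def generate(model, prompt, n_tokens):
    """Both understanding (text out) and generation (visual codes out)
    reduce to repeatedly appending the predicted next token."""
    seq = list(prompt)
    for _ in range(n_tokens):
        seq.append(next_token(model, seq))
    return seq

def toy_model(seq):
    """Toy stand-in for the transformer: deterministically prefers the
    token id following the last one (purely for demonstration)."""
    logits = [0.0] * VOCAB_SIZE
    logits[(seq[-1] + 1) % VOCAB_SIZE] = 1.0
    return logits

print(generate(toy_model, [5], 3))  # [5, 6, 7, 8]
```

In a real system the toy model would be a transformer producing logits over the unified vocabulary, and the discrete visual ids would come from a learned quantizer in the vision tower; the loop itself is unchanged, which is what removes the need for a separate diffusion head.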