VILA-U is a unified foundation model that integrates video, image, and language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and generating visual content, which can lead to misalignment between the two tasks and increased system complexity. In contrast, VILA-U employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components such as diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in both visual language understanding and generation. The success of VILA-U is attributed to two main factors: a unified vision tower that aligns discrete visual tokens with textual inputs during pretraining, which enhances visual perception; and the observation that autoregressive image generation can match the quality of diffusion models when trained on a high-quality dataset. Together these allow VILA-U to perform comparably to more complex models using a fully token-based autoregressive framework.
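To make the "fully token-based" idea concrete, the sketch below shows how discrete visual tokens (e.g., from a vector-quantized vision tower) and text tokens can live in one shared vocabulary, so that a single next-token predictor consumes and produces both. This is a minimal illustration of the general scheme, not VILA-U's actual implementation; the vocabulary sizes and function names are hypothetical.

```python
# Conceptual sketch: text tokens and discrete visual tokens share one
# vocabulary, so one autoregressive model handles understanding and
# generation. Sizes and names are illustrative, not VILA-U's actual API.

TEXT_VOCAB_SIZE = 1000       # hypothetical text vocabulary size
VISUAL_CODEBOOK_SIZE = 256   # hypothetical VQ codebook size for images


def visual_token_id(code_index: int) -> int:
    """Map a visual codebook index into the shared vocabulary by
    offsetting it past the text token range."""
    assert 0 <= code_index < VISUAL_CODEBOOK_SIZE
    return TEXT_VOCAB_SIZE + code_index


def is_visual(token_id: int) -> bool:
    """A token is visual iff it falls in the offset codebook range."""
    return TEXT_VOCAB_SIZE <= token_id < TEXT_VOCAB_SIZE + VISUAL_CODEBOOK_SIZE


# A multimodal prompt becomes one flat sequence: text ids interleaved
# with mapped visual ids. The next-token predictor treats it uniformly,
# whether the next token to emit is text (understanding) or a visual
# code (generation).
prompt = [5, 17, visual_token_id(0), visual_token_id(42), 9]
```

In this scheme, generating an image is just continuing the sequence with tokens from the visual range, which are then decoded back to pixels by the vision tower's decoder; no separate diffusion head is needed.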