Text-to-video (T2V) generation has recently garnered significant attention, thanks largely to the large multi-modal model Sora. However, T2V generation still faces two important challenges: 1) the lack of a precise, open-source, high-quality dataset. Previous popular video datasets, e.g., WebVid-10M and Panda-70M, are either of low quality or too large for most research institutions to handle. Collecting precise, high-quality text-video pairs for T2V generation is therefore challenging but crucial. 2) The underutilization of textual information. Recent T2V methods have focused on vision transformers, using a simple cross-attention module for video generation, which falls short of thoroughly extracting semantic information from the text prompt. To address these issues, we introduce OpenVid-1M, a precise, high-quality dataset with expressive captions. This open-scenario dataset contains over 1 million text-video pairs, facilitating research on T2V generation. Furthermore, we curate 433K 1080p videos from OpenVid-1M to create OpenVidHD-0.4M, advancing high-definition video generation. Additionally, we propose a novel Multi-modal Video Diffusion Transformer (MVDiT) capable of mining both structural information from visual tokens and semantic information from text tokens. Extensive experiments and ablation studies verify the superiority of OpenVid-1M over previous datasets and the effectiveness of our MVDiT.
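For illustration only, the following is a minimal sketch of what a multi-modal transformer block of this kind might look like, assuming joint self-attention over concatenated visual and text token streams so that the two modalities interact in every layer rather than through a single cross-attention module. The class name, dimensions, and concatenation scheme here are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn


class MultiModalBlock(nn.Module):
    """Hypothetical sketch of a joint visual-text transformer block.

    Visual tokens (structure) and text tokens (semantics) are
    concatenated and attended over together, then split back into
    their respective streams.
    """

    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, visual: torch.Tensor, text: torch.Tensor):
        # Concatenate the token streams: (B, N_v + N_t, dim)
        x = torch.cat([visual, text], dim=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)  # joint self-attention
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        # Split back into visual and text streams
        return x[:, : visual.size(1)], x[:, visual.size(1):]


if __name__ == "__main__":
    block = MultiModalBlock()
    v = torch.randn(2, 16, 768)  # e.g., patchified video latents
    t = torch.randn(2, 8, 768)   # e.g., encoded caption tokens
    v_out, t_out = block(v, t)
    print(v_out.shape, t_out.shape)
```

This sketch only conveys the general idea of fusing the two token streams; the actual MVDiT architecture described in the paper may differ substantially.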