The arrival of Sora marks a new era for text-to-video diffusion models, bringing significant advances in video generation and its potential applications. However, Sora, like other text-to-video diffusion models, relies heavily on prompts, and no publicly available dataset has studied text-to-video prompts. In this paper, we introduce VidProM, the first large-scale dataset comprising 1.67 million unique text-to-Video Prompts from real users. The dataset also includes 6.69 million videos generated by four state-of-the-art diffusion models, along with related data. We first describe the curation of this large-scale dataset, a process that is both time-consuming and costly. We then demonstrate the need for a prompt dataset designed specifically for text-to-video generation by showing how VidProM differs from DiffusionDB, a large-scale prompt-gallery dataset for image generation. Our extensive and diverse dataset opens up many exciting new research directions: for instance, exploring text-to-video prompt engineering, efficient video generation, and video copy detection for diffusion models, toward building better, more efficient, and safer models. The project (including the collected dataset VidProM and related code) is publicly available at https://vidprom.github.io under the CC-BY-NC 4.0 License.