High-quality video generation, encompassing text-to-video (T2V), image-to-video (I2V), and video-to-video (V2V) generation, holds considerable significance both for content creation, where it enables anyone to express their creativity in new ways, and for world simulation, where it supports modeling and understanding the world. Models like SORA have advanced video generation toward higher resolution, more natural motion, better vision-language alignment, and greater controllability, particularly for long video sequences. These improvements have been driven by the evolution of model architectures, shifting from UNet to more scalable and parameter-rich DiT models, along with large-scale data expansion and refined training strategies. However, despite the emergence of DiT-based closed-source and open-source models, a comprehensive investigation into their capabilities and limitations remains lacking. Furthermore, the field's rapid development has made it difficult for recent benchmarks to fully cover SORA-like models and recognize their significant advancements. Additionally, existing evaluation metrics often fail to align with human preferences.