Text-to-video generation has evolved rapidly in recent years, delivering remarkable results. Training typically relies on video-caption paired data, which plays a crucial role in improving generation performance. However, current video captions often suffer from insufficient detail, hallucinations, and imprecise motion depiction, which degrade the fidelity and consistency of generated videos. In this work, we propose a novel instance-aware structured caption framework, termed InstanceCap, to achieve instance-level, fine-grained video captioning for the first time. Building on this scheme, we design an auxiliary model cluster that converts the original video into instances to enhance instance fidelity. The video instances are further used to refine dense prompts into structured phrases, yielding concise yet precise descriptions. Furthermore, we curate a 22K-sample InstanceVid dataset for training and propose an enhancement pipeline tailored to the InstanceCap structure for inference. Experimental results demonstrate that our proposed InstanceCap significantly outperforms previous models, ensuring high fidelity between captions and videos while reducing hallucinations.
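To make the idea of an instance-level structured caption concrete, the sketch below shows one plausible way such a caption could be represented and flattened into a text prompt. This is a minimal illustration only: the class names, fields (`category`, `appearance`, `action`, `position`, `global_scene`), and the `to_prompt` formatting are assumptions for exposition, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of an instance-level structured caption record.
# All field names here are illustrative assumptions, not InstanceCap's schema.

@dataclass
class InstanceDescription:
    category: str    # e.g. "person", "car"
    appearance: str  # fine-grained visual attributes
    action: str      # precise motion depiction
    position: str    # coarse spatial location in the frame

@dataclass
class StructuredCaption:
    global_scene: str  # background and camera description
    instances: List[InstanceDescription] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Flatten the structured fields into one concise text prompt."""
        parts = [self.global_scene]
        for inst in self.instances:
            parts.append(
                f"{inst.category} ({inst.position}): "
                f"{inst.appearance}; {inst.action}"
            )
        return " | ".join(parts)

# Usage example with made-up content:
caption = StructuredCaption(
    global_scene="A sunny street scene, static camera",
    instances=[
        InstanceDescription(
            category="person",
            appearance="red jacket, short black hair",
            action="walks left to right at a steady pace",
            position="center foreground",
        )
    ],
)
print(caption.to_prompt())
```

Representing each instance as its own record, rather than one dense free-form paragraph, is what allows per-instance attributes and motion to be checked against the video, which is the fidelity property the abstract emphasizes.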