Recent advances in diffusion-based video generation have substantially improved visual fidelity and temporal coherence. However, most existing approaches remain task-specific and rely primarily on textual instructions, which limits their ability to handle multimodal inputs, contextual references, and diverse video generation and editing scenarios within a unified framework. Moreover, many video editing methods depend on carefully engineered pipelines tailored to individual operations, which hinders scalability and composability. In this paper, we propose Tele-Omni, a unified framework for video generation and editing that follows multimodal instructions, spanning text, images, and reference videos, within a single model. Tele-Omni leverages pretrained multimodal large language models to parse heterogeneous instructions and infer structured generation or editing intents, while diffusion-based generators synthesize high-quality videos conditioned on these structured signals. To enable joint training across heterogeneous video tasks, we introduce a task-aware data processing pipeline that unifies multimodal inputs into a structured instruction format while preserving task-specific constraints. Tele-Omni supports a wide range of video-centric tasks, including text-to-video generation, image-to-video generation, first-last-frame video generation, in-context video generation, and in-context video editing. By decoupling instruction parsing from video synthesis and combining this separation with task-aware data design, Tele-Omni achieves flexible multimodal control while maintaining strong temporal coherence and visual consistency. Experimental results demonstrate that Tele-Omni achieves competitive performance across multiple tasks.
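To make the notion of a unified structured instruction format more concrete, the sketch below shows one way such a record could look. This is a minimal illustration, not the paper's actual schema: the names StructuredInstruction, VideoTask, and the constraints fields are hypothetical assumptions introduced only for exposition.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class VideoTask(str, Enum):
    """Task types listed in the abstract as supported by Tele-Omni."""
    TEXT_TO_VIDEO = "text_to_video"
    IMAGE_TO_VIDEO = "image_to_video"
    FIRST_LAST_FRAME = "first_last_frame"
    IN_CONTEXT_GENERATION = "in_context_generation"
    IN_CONTEXT_EDITING = "in_context_editing"


@dataclass
class StructuredInstruction:
    """Hypothetical unified instruction record: multimodal inputs plus
    task-specific constraints, of the kind an MLLM parsing stage could
    emit for a downstream diffusion generator (illustrative only)."""
    task: VideoTask
    text_prompt: str
    image_paths: List[str] = field(default_factory=list)   # conditioning / reference frames
    reference_video: Optional[str] = None                   # in-context reference clip
    constraints: dict = field(default_factory=dict)         # e.g. frame count, fps


# Example: a first-last-frame generation request expressed in this unified format.
example = StructuredInstruction(
    task=VideoTask.FIRST_LAST_FRAME,
    text_prompt="A sailboat drifting toward the sunset",
    image_paths=["first_frame.png", "last_frame.png"],
    constraints={"num_frames": 49, "fps": 16},
)
```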