Recent video diffusion models have made remarkable strides in visual quality, yet precise, fine-grained control remains a key bottleneck that limits practical customizability for content creation. For AI video creators, three forms of control are crucial: (i) scene composition, (ii) multi-view consistent subject customization, and (iii) camera-pose or object-motion adjustment. Existing methods typically handle these dimensions in isolation, with limited support for multi-view subject synthesis and identity preservation under arbitrary pose changes. This lack of a unified architecture makes it difficult to support versatile, jointly controllable video generation. We introduce Tri-Prompting, a unified framework and two-stage training paradigm that integrates scene composition, multi-view subject consistency, and motion control. Our approach leverages a dual-condition motion module conditioned on 3D tracking points for the background scene and downsampled RGB cues for foreground subjects. To balance controllability against visual realism, we further propose an inference-time ControlNet scale schedule. Tri-Prompting supports novel workflows, including 3D-aware subject insertion into arbitrary scenes and manipulation of existing subjects in an image. Experimental results demonstrate that Tri-Prompting significantly outperforms specialized baselines such as Phantom and DaS in multi-view subject identity preservation, 3D consistency, and motion accuracy.
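The abstract names an inference-time ControlNet scale schedule but does not specify its form. As a minimal illustrative sketch only (not the paper's implementation): assuming a cosine decay from a strong scale early in denoising, when coarse layout and motion are fixed, to a weaker scale late, when the base model refines texture for realism, such a schedule might look like the following. The function name, the cosine shape, and the endpoint values `s_max`/`s_min` are all assumptions for illustration.

```python
import math

def controlnet_scale(step: int, num_steps: int,
                     s_max: float = 1.0, s_min: float = 0.3) -> float:
    """Hypothetical cosine decay of ControlNet conditioning strength.

    Strong control (s_max) in early denoising steps enforces scene layout
    and motion; weak control (s_min) in late steps lets the base diffusion
    model restore fine texture and visual realism.
    """
    t = step / max(num_steps - 1, 1)  # normalized progress in [0, 1]
    return s_min + 0.5 * (s_max - s_min) * (1.0 + math.cos(math.pi * t))

if __name__ == "__main__":
    # Print a few sample points of the schedule over a 50-step sampler.
    steps = 50
    for i in (0, 12, 25, 37, 49):
        print(f"step {i:2d}/{steps}: scale = {controlnet_scale(i, steps):.3f}")
```

In a diffusion pipeline, the returned scale would multiply the ControlNet residuals before they are added into the denoising backbone at each step; any monotonically decreasing schedule (linear, cosine, stepwise) fits the same controllability-versus-realism trade-off the abstract describes.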