We present EditIQ, a fully automated framework for cinematically editing scenes captured via a stationary, large field-of-view, high-resolution camera. From the static camera feed, EditIQ initially generates multiple virtual feeds, emulating a team of cameramen. These virtual camera shots, termed rushes, are subsequently assembled using an automated editing algorithm, whose objective is to present the viewer with the most vivid scene content. To understand key scene elements and guide the editing process, we employ a two-pronged approach: (1) a large language model (LLM)-based dialogue understanding module to analyze conversational flow, coupled with (2) visual saliency prediction to identify meaningful scene elements and the camera shots that capture them. We then formulate cinematic video editing as an energy minimization problem over shot selection, where cinematic constraints determine shot choices, transitions, and continuity. EditIQ synthesizes an aesthetically and visually compelling representation of the original narrative while maintaining cinematic coherence and a smooth viewing experience. The efficacy of EditIQ against competing baselines is demonstrated via a psychophysical study involving twenty participants on the BBC Old School dataset and eleven theatre performance videos. Video samples from EditIQ can be found at https://editiq-ave.github.io/.
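To make the energy-minimization formulation concrete, the sketch below poses shot selection as a first-order dynamic program: each frame incurs a unary cost for the shot being shown (standing in for saliency and dialogue cues) plus a pairwise cost for transitioning between shots (standing in for cut penalties and continuity constraints). This is a minimal illustration under assumed cost definitions; the function names, shapes, and toy costs are our own and do not reproduce the paper's actual objective or solver.

```python
import numpy as np

def select_shots(unary_cost, transition_cost):
    """Minimal dynamic-programming sketch of energy-minimizing shot selection.

    unary_cost: (T, S) array -- assumed cost of showing shot s at time t
        (e.g., derived from saliency / dialogue relevance; lower is better).
    transition_cost: (S, S) array -- assumed cost of cutting from shot i to
        shot j (a stand-in for cinematic cut penalties; diagonal is zero).
    Returns the minimum-energy shot index sequence of length T.
    """
    T, S = unary_cost.shape
    dp = np.zeros((T, S))                 # dp[t, s]: min energy ending in shot s at time t
    back = np.zeros((T, S), dtype=int)    # back[t, s]: best predecessor shot

    dp[0] = unary_cost[0]
    for t in range(1, T):
        # total[i, j]: energy of being in shot i at t-1 and shot j at t
        total = dp[t - 1][:, None] + transition_cost + unary_cost[t][None, :]
        back[t] = np.argmin(total, axis=0)
        dp[t] = np.min(total, axis=0)

    # Backtrack the optimal shot sequence from the final time step.
    seq = [int(np.argmin(dp[-1]))]
    for t in range(T - 1, 0, -1):
        seq.append(int(back[t, seq[-1]]))
    return seq[::-1]

if __name__ == "__main__":
    # Toy usage: 3 virtual rushes over 5 time steps with a constant cut penalty.
    rng = np.random.default_rng(0)
    unary = rng.random((5, 3))            # hypothetical saliency/dialogue costs
    trans = 0.5 * (1 - np.eye(3))         # fixed penalty for every cut
    print(select_shots(unary, trans))
```

Under this assumed first-order structure, the optimum is found exactly in O(T·S²) time; richer constraints (e.g., minimum shot durations) would enlarge the state space but leave the dynamic-programming idea intact.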