When interactively exploring video data, video-native querying involves consuming query results as videos, including steps such as compiling extracted video clips or overlaying data. These video-native queries are bottlenecked by rendering, not by the execution of the underlying queries. Rendering is currently performed by post-processing scripts that are often slow, and this step is a critical point of friction in interactive video data workloads: even short clips contain thousands of high-definition frames, and conventional OpenCV/Python scripts must decode -> transform -> encode the entire data stream before a single pixel appears, leaving users waiting for seconds, minutes, or even hours. To address this, we present Vidformer, a drop-in rendering accelerator for video-native querying that (i) transparently lifts existing visualization code into a declarative representation, (ii) transparently optimizes and parallelizes rendering, and (iii) instantly serves videos through a Video-on-Demand protocol with just-in-time segment rendering. We demonstrate that Vidformer cuts full-render time by 2-3x across diverse annotation workloads and, more critically, drops time-to-playback to 0.25-0.5s, a 400x improvement that decouples clip length from first-frame playback latency and enables interactive video-native querying at sub-second latencies. We further show how our approach enables interactive video-native LLM-based conversational querying.
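The latency gap between full-clip rendering and just-in-time segment rendering can be illustrated with back-of-envelope arithmetic. All numbers below (frame rate, clip length, script throughput, segment length) are illustrative assumptions, not measurements from this work:

```python
# Illustrative comparison: time before first playback under
# full-clip rendering vs. just-in-time (JIT) segment rendering.
# All constants are assumed values for the sake of the sketch.

FPS = 30                # clip frame rate (assumed)
CLIP_MINUTES = 10       # length of the queried clip (assumed)
RENDER_FPS = 100        # frames/sec a post-processing script renders (assumed)
SEGMENT_SECONDS = 2     # length of one Video-on-Demand segment (assumed)

total_frames = FPS * CLIP_MINUTES * 60

# Conventional script: decode -> transform -> encode the whole clip
# before any pixel can be played back.
full_render_wait = total_frames / RENDER_FPS

# JIT segment rendering: only the first segment must be rendered
# before playback starts; later segments render during playback.
first_segment_frames = FPS * SEGMENT_SECONDS
jit_wait = first_segment_frames / RENDER_FPS

print(f"full render wait: {full_render_wait:.1f}s")   # grows with clip length
print(f"JIT playback wait: {jit_wait:.1f}s")          # independent of clip length
```

Under these assumptions the full render takes 180s while JIT playback starts in 0.6s, and, crucially, the JIT wait does not grow with clip length, which is the decoupling the abstract describes.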