Motion modeling is critical in flow-based Video Frame Interpolation (VFI). Existing paradigms either consider linear combinations of bidirectional flows or directly predict bilateral flows for given timestamps without exploring favorable motion priors, thus lacking the capability to effectively model spatiotemporal dynamics in real-world videos. To address this limitation, in this study we introduce Generalizable Implicit Motion Modeling (GIMM), a novel and effective approach to motion modeling for VFI. Specifically, to enable GIMM as an effective motion modeling paradigm, we design a motion encoding pipeline that models a spatiotemporal motion latent from bidirectional flows extracted by pre-trained flow estimators, effectively representing input-specific motion priors. Then, we implicitly predict arbitrary-timestep optical flows between two adjacent input frames via an adaptive coordinate-based neural network, taking spatiotemporal coordinates and the motion latent as inputs. GIMM can be smoothly integrated with existing flow-based VFI works without further modification. We show that GIMM outperforms the current state of the art on standard VFI benchmarks.
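To make the coordinate-based prediction concrete, the sketch below shows the general idea of an implicit motion model: a small network maps a spatiotemporal coordinate (x, y, t) concatenated with a motion latent (encoded from bidirectional flows) to a flow vector for the queried timestep. This is a minimal illustration, not the paper's actual GIMM architecture; all names, dimensions, and the random (untrained) weights are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8   # dimensionality of the motion latent (illustrative choice)
HIDDEN = 32      # hidden width of the toy coordinate-based MLP

# Random weights stand in for a trained network in this sketch.
W1 = rng.normal(0, 0.1, (3 + LATENT_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, 2))  # output: a 2-D flow vector (u, v)
b2 = np.zeros(2)

def predict_flow(coords, latent):
    """coords: (N, 3) array of (x, y, t); latent: (N, LATENT_DIM) motion latents.
    Returns an (N, 2) array of flow vectors for the queried coordinates."""
    inputs = np.concatenate([coords, latent], axis=1)
    h = np.maximum(inputs @ W1 + b1, 0.0)  # one ReLU layer for illustration
    return h @ W2 + b2

# Query flows for the same pixel at several timesteps t in (0, 1):
coords = np.array([[0.5, 0.5, t] for t in (0.25, 0.5, 0.75)])
latent = np.tile(rng.normal(size=LATENT_DIM), (3, 1))  # shared per-pixel latent
flows = predict_flow(coords, latent)
print(flows.shape)  # one (u, v) flow vector per queried timestep
```

Because t enters the network as a continuous coordinate, flows can be queried at any intermediate timestep without retraining, which is what enables arbitrary-timestep interpolation in this paradigm.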