Motion modeling is critical in flow-based Video Frame Interpolation (VFI). Existing paradigms either consider linear combinations of bidirectional flows or directly predict bilateral flows for given timestamps without exploiting favorable motion priors, and thus fail to effectively model the spatiotemporal dynamics of real-world videos. To address this limitation, we introduce Generalizable Implicit Motion Modeling (GIMM), a novel and effective approach to motion modeling for VFI. Specifically, we design a motion encoding pipeline that models a spatiotemporal motion latent from bidirectional flows extracted by pre-trained flow estimators, effectively representing input-specific motion priors. We then implicitly predict optical flows at arbitrary timesteps between two adjacent input frames via an adaptive coordinate-based neural network, which takes spatiotemporal coordinates and the motion latent as inputs. GIMM can be easily integrated with existing flow-based VFI methods by supplying accurately modeled motion. We show that GIMM outperforms the current state of the art on standard VFI benchmarks.
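The implicit prediction step described above can be sketched as follows: a small coordinate-based network maps a continuous spatiotemporal coordinate (x, y, t), concatenated with a motion latent, to a 2D flow vector, so flows at arbitrary intermediate timesteps can be queried. This is only a minimal illustrative sketch with randomly initialized weights and made-up dimensions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's actual sizes).
LATENT_DIM = 16   # per-pixel motion latent from the encoding pipeline
HIDDEN = 32       # hidden width of the coordinate-based MLP
COORD_DIM = 3     # spatiotemporal coordinate (x, y, t)

# Random weights stand in for a trained network.
W1 = rng.normal(size=(COORD_DIM + LATENT_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 2))  # output: 2D flow vector (u, v)
b2 = np.zeros(2)

def predict_flow(coord, latent):
    """Predict a flow vector at a continuous (x, y, t) coordinate.

    coord:  array of shape (COORD_DIM,), normalized (x, y, t), t in [0, 1]
    latent: array of shape (LATENT_DIM,), motion latent at that location
    """
    h = np.tanh(np.concatenate([coord, latent]) @ W1 + b1)
    return h @ W2 + b2  # (u, v) displacement

# Query the same location at an arbitrary intermediate timestep t = 0.3.
latent = rng.normal(size=LATENT_DIM)
flow_t = predict_flow(np.array([0.5, 0.5, 0.3]), latent)
print(flow_t.shape)  # (2,)
```

Because the timestep t is a continuous input rather than a fixed index, the same network answers flow queries at any t between the two input frames, which is what lets GIMM-style motion modeling plug into existing flow-based VFI pipelines.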