With the development of VR-related techniques, viewers can enjoy a realistic and immersive experience through a head-mounted display, yet omnidirectional video with a low frame rate can cause user dizziness. However, prevailing frame interpolation methods designed for planar video are unsuitable for omnidirectional video interpolation, chiefly because no models are tailored to such strongly distorted videos, a problem compounded by the scarcity of datasets for Omnidirectional Video Frame Interpolation. In this paper, we introduce 360VFI, a benchmark dataset for Omnidirectional Video Frame Interpolation. We present a practical implementation that introduces a distortion prior from omnidirectional video into the network to modulate distortion. In particular, we propose a pyramid distortion-sensitive feature extractor that uses the unique characteristics of the equirectangular projection (ERP) format as prior information. Moreover, we devise a decoder that uses an affine transformation to further facilitate the synthesis of intermediate frames. 360VFI is the first dataset and benchmark to explore the challenge of Omnidirectional Video Frame Interpolation. Through our benchmark analysis, we present scenes with four different distortion conditions in the proposed 360VFI dataset to evaluate the challenges that distortion introduces during interpolation. Furthermore, experimental results demonstrate that omnidirectional video interpolation can be effectively improved by modeling omnidirectional distortion.
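As a hedged illustration of the kind of ERP prior the abstract refers to (the exact prior used in the paper is not specified here), the latitude-dependent stretch of equirectangular projection can be expressed as a per-pixel weight map, assuming a cos(latitude) weighting of the sort used in WS-PSNR; the function name below is hypothetical:

```python
import numpy as np

def erp_distortion_prior(height, width):
    """Per-pixel stretch weight for an equirectangular (ERP) frame.

    In ERP, each image row corresponds to a latitude; pixels near the
    poles are horizontally stretched, so the effective sampling weight
    is proportional to cos(latitude). This is one simple way to encode
    ERP distortion as prior information for a network.
    """
    # Row j maps to a latitude in (-pi/2, pi/2); sample at row centers.
    rows = (np.arange(height) + 0.5) / height   # normalized row position in (0, 1)
    latitude = (rows - 0.5) * np.pi             # latitude in (-pi/2, pi/2)
    weights = np.cos(latitude)                  # ~1 at the equator, -> 0 at the poles
    # Broadcast the per-row weight across all columns of the frame.
    return np.tile(weights[:, None], (1, width))

prior = erp_distortion_prior(180, 360)
```

Such a map could be concatenated with the input frames or used to modulate intermediate features, reflecting that distortion in ERP video depends only on the row (latitude), not the column.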