With the rapid proliferation of 3D devices and the shortage of 3D content, stereo conversion is attracting increasing attention. Recent works introduce pretrained Diffusion Models (DMs) into this task. However, due to the scarcity of large-scale training data and comprehensive benchmarks, the optimal methodologies for employing DMs in stereo conversion and the accurate evaluation of stereo effects remain largely unexplored. In this work, we introduce the Mono2Stereo dataset, providing high-quality training data and a benchmark to support in-depth exploration of stereo conversion. With this dataset, we conduct an empirical study that yields two primary findings. 1) The differences between the left and right views are subtle, yet existing metrics evaluate all pixels uniformly, failing to concentrate on the regions critical to stereo effects. 2) Mainstream methods adopt either one-stage left-to-right generation or a warp-and-inpaint pipeline, which suffer from degraded stereo effect and image distortion, respectively. Based on these findings, we introduce a new evaluation metric, Stereo Intersection-over-Union, which prioritizes disparity and achieves a high correlation with human judgments on stereo effect. Moreover, we propose a strong baseline model that harmonizes stereo effect and image quality simultaneously, notably surpassing current mainstream methods. Our code and data will be open-sourced to promote further research in stereo conversion. Our models are available at mono2stereo-bench.github.io.