While recent image warping approaches have achieved remarkable success on existing benchmarks, they still require training a separate model for each specific task and generalize poorly to different camera models or customized manipulations. To address the diverse warping tasks that arise in practice, we propose a Multiple-in-One image WArping model (named MOWA) in this work. Specifically, we mitigate the difficulty of multi-task learning by disentangling motion estimation at both the region level and the pixel level. To further enable dynamic, task-aware image warping, we introduce a lightweight point-based classifier that predicts the task type; its prediction serves as a prompt to modulate the feature maps for more accurate estimation. To our knowledge, this is the first work to solve multiple practical warping tasks within a single model. Extensive experiments demonstrate that MOWA, trained on six tasks for multiple-in-one image warping, outperforms state-of-the-art task-specific models on most tasks. Moreover, MOWA exhibits promising potential to generalize to unseen scenes, as evidenced by cross-domain and zero-shot evaluations. The code and more visual results can be found on the project page: https://kangliao929.github.io/projects/mowa/.
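The task-aware modulation described above can be sketched as follows. This is a minimal, hypothetical illustration only: it assumes a FiLM-style (per-channel scale-and-shift) prompt mechanism and a linear task classifier, with toy names and shapes that are not MOWA's actual architecture.

```python
import numpy as np

NUM_TASKS = 6   # the paper trains on six warping tasks
FEAT_DIM = 8    # toy channel count, illustrative only

rng = np.random.default_rng(0)


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def classify_task(point_features, W):
    """Lightweight classifier (here, a single linear layer):
    maps pooled point features to a distribution over task types."""
    return softmax(W @ point_features)


def modulate(features, task_probs, gamma, beta):
    """Mix per-task scale/shift prompts by the predicted task
    probabilities, then apply them channel-wise to the feature map."""
    scale = task_probs @ gamma   # (FEAT_DIM,)
    shift = task_probs @ beta    # (FEAT_DIM,)
    return features * scale[:, None, None] + shift[:, None, None]


# Toy inputs: pooled point features and a small feature map.
point_feats = rng.standard_normal(16)
feat_map = rng.standard_normal((FEAT_DIM, 4, 4))

# Learnable (here, random) per-task modulation parameters.
W = rng.standard_normal((NUM_TASKS, 16))
gamma = rng.standard_normal((NUM_TASKS, FEAT_DIM))
beta = rng.standard_normal((NUM_TASKS, FEAT_DIM))

probs = classify_task(point_feats, W)
out = modulate(feat_map, probs, gamma, beta)
```

In this sketch the classifier output acts as a soft prompt: rather than hard-switching between task-specific branches, the predicted probabilities blend per-task modulation parameters, so a single backbone can adapt its features to the detected warping task.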