We introduce AquaFuse, a physics-based method for synthesizing waterbody properties in underwater imagery. We formulate a closed-form solution for waterbody fusion that enables realistic data augmentation and geometrically consistent underwater scene rendering. AquaFuse leverages the physical characteristics of underwater light propagation to transfer the waterbody of one scene onto the object contents of another. Unlike data-driven style transfer, AquaFuse preserves the depth consistency and object geometry of the input scene. We validate this unique feature through comprehensive experiments over diverse underwater scenes, finding that AquaFused images preserve over 94% depth consistency and 90-95% structural similarity of the input scenes. We also demonstrate that the method supports accurate 3D view synthesis, preserving object geometry while adapting to the fused waterbody. AquaFuse opens up a new research direction in data augmentation by geometry-preserving style transfer for underwater imaging and robot vision applications.
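As a rough illustration of physics-based waterbody transfer, the sketch below uses the standard underwater image formation model, I = J·e^(−βd) + B·(1 − e^(−βd)), where J is the restored scene radiance, d the per-pixel range, β the per-channel attenuation coefficient, and B the veiling (background) light. This is an assumption for illustration: the paper's actual closed-form fusion is not reproduced here, and all function names and parameters are hypothetical.

```python
import numpy as np

def apply_waterbody(J, depth, beta, B):
    """Render scene radiance J through a waterbody using the
    standard formation model (a sketch, not the paper's exact form):
        I = J * exp(-beta * d) + B * (1 - exp(-beta * d))
    J:     (H, W, 3) restored scene radiance in [0, 1]
    depth: (H, W)    per-pixel range in meters
    beta:  (3,)      per-channel attenuation coefficients
    B:     (3,)      per-channel veiling light color
    """
    t = np.exp(-beta[None, None, :] * depth[..., None])  # transmission map
    return J * t + B[None, None, :] * (1.0 - t)

def remove_waterbody(I, depth, beta, B, eps=1e-6):
    """Invert the formation model to recover J from an observed image I."""
    t = np.exp(-beta[None, None, :] * depth[..., None])
    return (I - B[None, None, :] * (1.0 - t)) / np.maximum(t, eps)

def waterbody_fusion_sketch(I_src, depth_src, params_src, params_tgt):
    """Strip the source scene's waterbody, then re-render the same
    geometry (same depth map) under the target waterbody parameters.
    Because depth is reused unchanged, scene geometry is preserved."""
    J = remove_waterbody(I_src, depth_src, *params_src)
    return apply_waterbody(np.clip(J, 0.0, 1.0), depth_src, *params_tgt)
```

Because the depth map passes through the fusion untouched, any depth-consistency metric computed on the input and fused images agrees by construction, which is the intuition behind a physics-based approach preserving geometry where data-driven style transfer may not.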