Image inpainting is a fundamental task in computer vision, aiming to realistically restore missing or corrupted regions of an image. While recent deep learning approaches have significantly advanced the state of the art, challenges remain in maintaining structural continuity and generating coherent textures, particularly over large missing areas. Diffusion models have shown promise in generating high-fidelity images but often lack the structural guidance needed for realistic inpainting. We propose a novel inpainting method that combines diffusion models with anisotropic Gaussian splatting to capture both local structure and global context effectively. By modeling missing regions with anisotropic Gaussian functions that adapt to local image gradients, our approach provides structural guidance to the diffusion-based inpainting network. The Gaussian splat maps are integrated into the diffusion process, enhancing the model's ability to generate high-fidelity and structurally coherent inpainting results. Extensive experiments demonstrate that our method outperforms state-of-the-art techniques, producing visually plausible results with improved structural integrity and texture realism.
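To make the core idea concrete, the sketch below illustrates one plausible way to build an anisotropic Gaussian splat map that adapts to local image gradients: each Gaussian is elongated along the local edge direction (perpendicular to the gradient) and narrow across it. The function name, parameters (`base_sigma`, `aniso`), and the choice of splat centers are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def anisotropic_gaussian_splat(image, centers, base_sigma=2.0, aniso=4.0):
    """Accumulate anisotropic Gaussians into a guidance map.

    Each Gaussian at (cy, cx) is oriented by the local image gradient:
    narrow across the edge (std = base_sigma) and elongated along it
    (std = base_sigma * aniso). Names and defaults are illustrative.
    """
    H, W = image.shape
    gy, gx = np.gradient(image.astype(np.float64))
    ys, xs = np.mgrid[0:H, 0:W]
    splat = np.zeros((H, W))
    for cy, cx in centers:
        g = np.array([gx[cy, cx], gy[cy, cx]])
        n = np.linalg.norm(g)
        if n > 1e-8:
            u = g / n                      # across-edge direction
            t = np.array([-u[1], u[0]])    # along-edge direction
        else:
            u = np.array([1.0, 0.0])       # fall back to isotropic axes
            t = np.array([0.0, 1.0])
        # Covariance with principal axes u (narrow) and t (elongated).
        R = np.stack([u, t], axis=1)       # columns = principal axes
        S = np.diag([base_sigma**2, (base_sigma * aniso)**2])
        inv = np.linalg.inv(R @ S @ R.T)
        d = np.stack([xs - cx, ys - cy], axis=-1)      # (H, W, 2) offsets
        m = np.einsum('...i,ij,...j->...', d, inv, d)  # Mahalanobis distance
        splat += np.exp(-0.5 * m)
    return splat / max(splat.max(), 1e-12)

# Example: one splat centered on a vertical edge stretches along the edge.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
guide = anisotropic_gaussian_splat(img, [(16, 16)])
```

In a full pipeline, such a map would be concatenated with (or otherwise conditioned into) the diffusion model's input so that denoising steps receive structural guidance inside the masked region; the precise conditioning mechanism is method-specific and not shown here.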