Diffusion models have demonstrated impressive performance in various image generation, editing, enhancement and translation tasks. In particular, pre-trained text-to-image stable diffusion models provide a potential solution to the challenging problems of realistic image super-resolution (Real-ISR) and image stylization with their strong generative priors. However, existing methods along this line often fail to preserve faithful pixel-wise image structures. If extra skip connections between the encoder and the decoder of a VAE are used to reproduce details, additional training in image space is required, limiting the application to tasks in latent space such as image stylization. In this work, we propose a pixel-aware stable diffusion (PASD) network to achieve robust Real-ISR and personalized image stylization. Specifically, a pixel-aware cross attention module is introduced to enable diffusion models to perceive local image structures at the pixel level, while a degradation removal module is used to extract degradation-insensitive features that guide the diffusion process together with high-level image information. An adjustable noise schedule is introduced to further improve the image restoration results. By simply replacing the base diffusion model with a stylized one, PASD can generate diverse stylized images without collecting pairwise training data, and by replacing the base model with an aesthetic one, PASD can bring old photos back to life. Extensive experiments on a variety of image enhancement and stylization tasks demonstrate the effectiveness of our proposed PASD approach. Our source codes are available at \url{https://github.com/yangxy/PASD/}.
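The pixel-aware cross attention idea described above can be illustrated with a minimal sketch: diffusion latent tokens act as queries and attend to pixel-level guidance tokens (e.g., features from the degradation removal module), injecting local structure through a residual connection. The class and tensor shapes below are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class PixelAwareCrossAttention(nn.Module):
    """Hypothetical sketch: latent features (queries) attend to
    pixel-level guidance features (keys/values) from the LR input."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, latent_feat, pixel_feat):
        # latent_feat: (B, N_latent, C); pixel_feat: (B, N_pixel, C)
        q = self.norm_q(latent_feat)
        kv = self.norm_kv(pixel_feat)
        out, _ = self.attn(q, kv, kv)
        # residual injection of pixel-level structure into the latent path
        return latent_feat + out

# usage with illustrative shapes
x = torch.randn(2, 64, 32)   # 64 latent tokens, 32 channels
g = torch.randn(2, 256, 32)  # 256 pixel-level guidance tokens
y = PixelAwareCrossAttention(32)(x, g)
print(y.shape)  # torch.Size([2, 64, 32])
```

Because the attention output is added residually, the module can be dropped into an existing diffusion backbone without disturbing its pre-trained latent pathway.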