With the advent of Generative AI, Single Image Super-Resolution (SISR) quality has seen substantial improvement, as the strong priors learned by Text-to-Image Diffusion (T2IDiff) Foundation Models (FM) can bridge the gap between High-Resolution (HR) and Low-Resolution (LR) images. However, flagship smartphone cameras have been slow to adopt generative models because strong generation can lead to undesirable hallucinations. For the substantially degraded LR images common in academic benchmarks, strong generation is required and hallucinations are more tolerable because of the wide gap between LR and HR images. In contrast, in consumer photography, the LR image has substantially higher fidelity, requiring only minimal, hallucination-free generation. We hypothesize that generation in SISR is controlled by the stringency and richness of the FM's conditioning features. First, text features are high-level features, which often cannot describe subtle textures in an image. Additionally, smartphone LR images are at least 12 MP, whereas SISR networks built on T2IDiff FMs are designed to perform inference on much smaller images ($<1$ MP). As a result, SISR inference has to be performed on small patches, which often cannot be accurately described by text features. To address these shortcomings, we introduce an SISR network built on an FM with lower-level feature conditioning, specifically DINOv2 features, which we call a Feature-to-Image Diffusion (F2IDiff) Foundation Model. Lower-level features provide stricter conditioning while being rich descriptors of even small patches.