Relying on paired synthetic data, existing learning-based Computational Aberration Correction (CAC) methods are confronted with the intricate and multifaceted synthetic-to-real domain gap, which leads to suboptimal performance in real-world applications. In this paper, rather than improving the simulation pipeline, we offer a novel perspective on real-world CAC through the lens of Unsupervised Domain Adaptation (UDA). By incorporating readily accessible unpaired real-world data into training, we formalize the Domain Adaptive CAC (DACAC) task and introduce a comprehensive Real-world aberrated images (Realab) dataset to benchmark it. The resulting task poses a formidable challenge due to the intricacy of understanding the target optical degradation domain. To this end, we propose a novel Quantized Domain-Mixing Representation (QDMR) framework as a potent solution. Centered on representing and quantizing the optical degradation, which is consistent across different images, QDMR adapts the CAC model to the target domain in three key aspects: (1) reconstructing aberrated images of both domains with a VQGAN to learn a Domain-Mixing Codebook (DMC) that characterizes the optical degradation; (2) modulating the deep features of the CAC model with the DMC to transfer target-domain knowledge; and (3) leveraging the trained VQGAN to generate pseudo target aberrated images from the source ones for convincing target-domain supervision. Extensive experiments on both synthetic and real-world benchmarks reveal that models equipped with QDMR consistently surpass competitive methods in mitigating the synthetic-to-real gap, producing visually pleasant real-world CAC results with fewer artifacts. Code and datasets are publicly available at https://github.com/zju-jiangqi/QDMR.