In recent years, remarkable advances have been achieved in image generation, driven largely by the escalating demand for high-quality outputs across various image generation subtasks, such as inpainting, denoising, and super-resolution. A major research effort is devoted to applying super-resolution techniques to enhance the quality of low-resolution images. In this context, our method explores in depth the problem of ship image super-resolution, which is crucial for coastal and port surveillance. We exploit the growing capabilities of text-to-image diffusion models, taking advantage of the prior knowledge that such foundation models have already learned. In particular, we present a diffusion-model-based architecture that leverages text conditioning during training while being class-aware, so as to best preserve the crucial details of the ships when generating the super-resolved image. Given the specificity of this task and the scarce availability of off-the-shelf data, we also introduce a large labeled ship dataset scraped from online ship images, mostly from the ShipSpotting\footnote{\url{www.shipspotting.com}} website. Our method achieves more robust results than other deep learning models previously employed for super-resolution, as demonstrated by extensive experiments. Moreover, we investigate how this model can benefit downstream tasks, such as classification and object detection, thus emphasizing its practical applicability in real-world scenarios. Experimental results show the flexibility, reliability, and strong performance of the proposed framework over state-of-the-art methods across different tasks. The code is available at: https://github.com/LuigiSigillo/ShipinSight .
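As a minimal sketch of the class-aware text conditioning described above, the snippet below illustrates how a ship-class label could be embedded into a text prompt that guides a text-to-image diffusion prior during super-resolution. The prompt template, class names, and `build_prompt` helper are illustrative assumptions, not the paper's exact conditioning format.

```python
# Hypothetical sketch: class-aware prompt construction for a
# text-conditioned diffusion super-resolution model.
# Class vocabulary and template are assumptions for illustration only.

SHIP_CLASSES = ["container ship", "tanker", "tug", "fishing vessel", "cruise ship"]

def build_prompt(class_label: str) -> str:
    """Embed the ship class into the text prompt conditioning the diffusion model."""
    if class_label not in SHIP_CLASSES:
        # Fall back to a generic prompt when the class label is unknown.
        return "a high-resolution photo of a ship"
    return f"a high-resolution photo of a {class_label} at sea"

print(build_prompt("tanker"))
```

At inference time, such a prompt would be passed alongside the low-resolution image to the text-conditioned diffusion backbone, so the generative prior is steered toward class-consistent details.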