All-in-one image restoration tackles different types of degradations with a unified model instead of relying on task-specific, non-generic models for each degradation. The requirement to tackle multiple degradations with the same model can lead to high-complexity designs with a fixed configuration that lacks the adaptability of more efficient alternatives. We propose DyNet, a dynamic family of networks designed in an encoder-decoder style for all-in-one image restoration tasks. DyNet can seamlessly switch between its bulkier and lightweight variants, thereby offering flexibility for efficient model deployment with a single round of training. This seamless switching is enabled by our weights-sharing mechanism, which forms the core of our architecture and facilitates the reuse of initialized module weights. Furthermore, to establish robust weight initialization, we introduce a dynamic pre-training strategy that trains variants of the proposed DyNet concurrently, achieving a 50% reduction in GPU hours. To address the unavailability of a large-scale dataset for pre-training, we curate a high-quality, high-resolution image dataset named Million-IRD with 2M image samples. We validate DyNet on image denoising, deraining, and dehazing in the all-in-one setting, achieving state-of-the-art results with a 31.34% reduction in GFLOPs and a 56.75% reduction in parameters compared to baseline models. The source code and trained models are available at https://github.com/akshaydudhane16/DyNet.
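The core idea of the weights-sharing mechanism can be illustrated with a minimal sketch: one set of block weights is initialized once and reused across network depth, so the bulkier and lightweight variants differ only in how many times the shared block is applied. This is a hypothetical toy illustration, not the authors' implementation; the class names (`SharedBlock`, `DyNetSketch`) and the depths chosen are assumptions for exposition.

```python
class SharedBlock:
    """Toy stand-in for a restoration block whose single weight is
    reused at every depth level (the weights-sharing idea)."""

    def __init__(self, weight=0.5):
        self.weight = weight  # one shared parameter, initialized once

    def __call__(self, x):
        # Placeholder transform standing in for a real network block.
        return self.weight * x + 0.1


class DyNetSketch:
    """Variants reuse the SAME block object (shared weights) and differ
    only in depth, enabling switching without retraining."""

    def __init__(self, shared_block, depth):
        self.block = shared_block  # identical weights across variants
        self.depth = depth         # variant-specific capacity knob

    def restore(self, x):
        for _ in range(self.depth):
            x = self.block(x)  # reuse the shared block at every level
        return x


block = SharedBlock()
bulky = DyNetSketch(block, depth=8)  # higher-capacity variant
light = DyNetSketch(block, depth=2)  # efficient variant, same weights
```

Because both variants hold a reference to the same `SharedBlock`, training updates to its weight benefit every variant simultaneously, which is the property that allows a single round of training to serve multiple deployment configurations.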