Deep Neural Networks (DNNs) are well known to act as over-parameterized deep image priors (DIPs) that regularize various image inverse problems. Meanwhile, researchers have also proposed extremely compact, under-parameterized image priors (e.g., the deep decoder) that remain strikingly competent for image restoration, despite some loss of accuracy. These two extremes prompt us to ask whether a better solution exists in between: can one identify an "intermediate" parameterization of image priors that achieves a better trade-off among performance, efficiency, and even strong transferability? Drawing inspiration from the lottery ticket hypothesis (LTH), we conjecture and study a novel "lottery image prior" (LIP) that exploits the inherent sparsity of DNNs, stated as follows: given an over-parameterized DNN-based image prior, it contains a sparse subnetwork that can be trained in isolation to match the original DNN's performance when applied as a prior to various image inverse problems. Our results validate the superiority of LIP: we can successfully locate LIP subnetworks in over-parameterized DIPs across substantial sparsity ranges. These LIP subnetworks significantly outperform deep decoders at comparably compact model sizes (often fully preserving the effectiveness of their over-parameterized counterparts), and they also transfer well across different images and restoration task types. We further extend LIP to compressive-sensing image reconstruction, where a pre-trained GAN generator serves as the prior (in contrast to the untrained DIP or deep decoder), and confirm its validity in that setting as well. To the best of our knowledge, this is the first demonstration that LTH is relevant in the context of image inverse problems or image priors.
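The LIP conjecture above can be illustrated with the standard LTH recipe, iterative magnitude pruning (IMP) with rewinding to the original initialization, applied to a toy untrained network fitted to a single signal (a stand-in for a DIP). This is a minimal sketch under stated assumptions: the two-layer MLP, the 1-D "image", the pruning schedule, and all hyperparameters below are illustrative choices, not the paper's actual architecture or setup.

```python
import numpy as np

rng = np.random.default_rng(0)

target = np.sin(np.linspace(0, 3 * np.pi, 32))   # toy 1-D "image" to restore
z = rng.normal(size=16)                           # fixed random input, as in DIP

def init_weights():
    # Toy two-layer MLP "prior": 16 -> 64 (tanh) -> 32, scaled by 1/sqrt(fan_in).
    return [rng.normal(scale=0.25, size=(16, 64)),
            rng.normal(scale=0.125, size=(64, 32))]

def forward(ws, mask):
    h = np.tanh(z @ (ws[0] * mask[0]))
    return h @ (ws[1] * mask[1])

def train(ws, mask, steps=800, lr=0.01):
    # Gradient descent on 0.5 * ||forward(ws) - target||^2; pruned weights
    # stay frozen because their gradients are masked out.
    ws = [w.copy() for w in ws]
    for _ in range(steps):
        h = np.tanh(z @ (ws[0] * mask[0]))
        err = h @ (ws[1] * mask[1]) - target
        dh = (err @ (ws[1] * mask[1]).T) * (1.0 - h ** 2)
        ws[1] -= lr * np.outer(h, err) * mask[1]
        ws[0] -= lr * np.outer(z, dh) * mask[0]
    return ws

def global_prune(ws, mask, frac=0.5):
    # Remove the fraction `frac` of surviving weights with smallest magnitude,
    # ranked globally across layers.
    alive = np.concatenate([np.abs(w[m > 0]) for w, m in zip(ws, mask)])
    thresh = np.quantile(alive, frac)
    return [m * (np.abs(w) >= thresh) for w, m in zip(ws, mask)]

init = init_weights()
mask = [np.ones_like(w) for w in init]
for _ in range(3):                     # 3 IMP rounds: density 0.5^3 = 12.5%
    mask = global_prune(train(init, mask), mask)
ticket = train(init, mask)             # rewind to init, train in isolation
sparsity = 1.0 - sum(m.sum() for m in mask) / sum(m.size for m in mask)
mse = np.mean((forward(ticket, mask) - target) ** 2)
print(f"sparsity={sparsity:.3f}  mse={mse:.5f}")
```

The key point mirrored from the abstract is the last two lines of the loop: the surviving subnetwork is rewound to its original initialization and trained in isolation, and it is this sparse "ticket" that is then used as the image prior.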