Generative models have emerged as powerful priors for solving inverse problems. These models typically represent a class of natural signals using a single fixed complexity or dimensionality. This can be limiting: depending on the problem, a fixed complexity may result in high representation error if too small, or overfitting to noise if too large. We develop tunable-complexity priors for diffusion models, normalizing flows, and variational autoencoders, leveraging nested dropout. Across tasks including compressed sensing, inpainting, denoising, and phase retrieval, we show empirically that tunable priors consistently achieve lower reconstruction errors than fixed-complexity baselines. In the linear denoising setting, we provide a theoretical analysis that explicitly characterizes how the optimal tuning parameter depends on noise and model structure. This work demonstrates the potential of tunable-complexity generative priors and motivates both the development of supporting theory and their application across a wide range of inverse problems.
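The nested-dropout mechanism behind the tunable priors can be sketched as follows: during training, a truncation index is sampled and all latent units beyond it are zeroed, which orders the latent dimensions by importance; at reconstruction time, the effective complexity is then tuned by choosing the truncation deterministically. This is a minimal illustrative sketch in NumPy, not the paper's implementation; the function names and the geometric sampling distribution are assumptions.

```python
import numpy as np

def nested_dropout(z, rng, p=0.1):
    """Training-time nested dropout (illustrative sketch).

    Samples a truncation index k ~ Geometric(p) and keeps only the
    first k latent units, zeroing the rest. Because low-index units
    are dropped least often, the model learns to pack the most
    important information into the leading dimensions.
    """
    k = min(int(rng.geometric(p)), z.shape[-1])
    mask = np.zeros_like(z)
    mask[..., :k] = 1.0
    return z * mask

def truncate(z, k):
    """Inference-time complexity control: keep the first k units.

    k is the tunable complexity parameter; small k gives a coarse,
    noise-robust prior, large k a high-fidelity but overfit-prone one.
    """
    out = np.zeros_like(z)
    out[..., :k] = z[..., :k]
    return out
```

At solve time, `k` plays the role of the tuning parameter analyzed in the linear denoising setting: sweeping `k` trades representation error against noise overfitting.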