Implicit neural representations (INRs) have emerged as a powerful tool for solving inverse problems in computer vision and computational imaging. INRs represent images as continuous domain functions realized by a neural network taking spatial coordinates as inputs. However, unlike traditional pixel representations, little is known about the sample complexity of estimating images using INRs in the context of linear inverse problems. Towards this end, we study the sampling requirements for recovery of a continuous domain image from its low-pass Fourier coefficients by fitting a single hidden-layer INR with ReLU activation and a Fourier features layer using a generalized form of weight decay regularization. Our key insight is to relate minimizers of this non-convex parameter space optimization problem to minimizers of a convex penalty defined over an infinite-dimensional space of measures. We identify a sufficient number of samples for which an image realized by a width-1 INR is exactly recoverable by solving the INR training problem, and give a conjecture for the general width-$W$ case. To validate our theory, we empirically assess the probability of achieving exact recovery of images realized by low-width single hidden-layer INRs, and illustrate the performance of INRs on super-resolution recovery of more realistic continuous domain phantom images.
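For concreteness, a minimal sketch of the setup described above (the notation $W$, $\gamma$, $\Omega$, $\lambda$, and $\mathcal{A}$ is ours, and the exact generalized weight decay penalty studied in the paper may differ): a single hidden-layer INR of width $W$ with a Fourier features layer and ReLU activation takes the form
$$
f_\theta(\mathbf{x}) \;=\; \sum_{j=1}^{W} a_j\, \mathrm{ReLU}\!\big(\mathbf{w}_j^\top \gamma(\mathbf{x}) + b_j\big),
\qquad
\gamma(\mathbf{x}) \;=\; \big[\cos(2\pi\,\mathbf{k}^\top \mathbf{x}),\ \sin(2\pi\,\mathbf{k}^\top \mathbf{x})\big]_{\mathbf{k}\in\Omega},
$$
and the recovery problem fits $f_\theta$ to the observed low-pass Fourier coefficients $y$, schematically
$$
\min_{\theta}\ \big\|\mathcal{A} f_\theta - y\big\|_2^2 \;+\; \lambda \sum_{j=1}^{W}\big(\|\mathbf{w}_j\|_2^2 + a_j^2\big),
$$
where $\mathcal{A}$ maps a continuous domain image to its low-pass Fourier coefficients and the second term is the standard weight decay penalty, of which the paper considers a generalized form.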