Motivated by the growing theoretical understanding of neural networks that employ the Rectified Linear Unit (ReLU) as their activation function, we revisit the use of ReLU activation functions for learning implicit neural representations (INRs). Inspired by second-order B-spline wavelets, we incorporate a set of simple constraints on the ReLU neurons in each layer of a deep neural network (DNN) to remedy the spectral bias. This in turn enables their use for various INR tasks. Empirically, we demonstrate that, contrary to popular belief, one can learn state-of-the-art INRs with a DNN composed of only ReLU neurons. Next, by leveraging recent theoretical work characterizing the kinds of functions ReLU neural networks learn, we provide a way to quantify the regularity of the learned function. This offers a principled approach to selecting hyperparameters in INR architectures. We substantiate our claims through experiments on signal representation, super-resolution, and computed tomography, demonstrating the versatility and effectiveness of our method. The code for all experiments can be found at https://github.com/joeshenouda/relu-inrs.
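To make the idea of B-spline-wavelet-inspired constraints concrete, the following is a minimal illustrative sketch, not the authors' exact construction: it builds a continuous piecewise-linear, wavelet-like activation as a fixed linear combination of shifted ReLUs, using the Chui-Wang order-2 (linear) B-spline wavelet coefficients as one concrete example of how a group of ReLU neurons can be tied together. The helper names (`relu`, `hat`, `bspline_wavelet`) and the specific coefficients are assumptions for illustration; the paper's per-layer constraints may differ.

```python
# Illustrative sketch (assumed, not the paper's implementation): a wavelet-like
# piecewise-linear activation expressed entirely with ReLUs.
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def hat(t):
    # Linear B-spline (hat function) on [0, 2], itself a combination of three ReLUs.
    return relu(t) - 2.0 * relu(t - 1.0) + relu(t - 2.0)

# Chui-Wang order-2 B-spline wavelet on [0, 3]: psi(t) = sum_j q_j * hat(2t - j).
Q = np.array([1/12, -1/2, 5/6, -1/2, 1/12])

def bspline_wavelet(t):
    return sum(q * hat(2.0 * t - j) for j, q in enumerate(Q))

# The result is oscillatory and compactly supported, unlike a single ReLU,
# which is the kind of behavior needed to counteract spectral bias.
t = np.linspace(-1.0, 4.0, 1001)
y = bspline_wavelet(t)
print(y.min(), y.max())
```

Because the combined activation remains an exact sum of ReLUs, a network built from such constrained groups is still, neuron for neuron, a plain ReLU network.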