To investigate neural network parameters, it is easier to study the distribution of parameters than to study the parameters in each neuron individually. The ridgelet transform is a pseudo-inverse operator that maps a given function $f$ to a parameter distribution $\gamma$ such that the network $\mathtt{NN}[\gamma]$ reproduces $f$, i.e. $\mathtt{NN}[\gamma]=f$. For depth-2 fully-connected networks on a Euclidean space, the ridgelet transform has been known in closed form, so we can describe how the parameters are distributed. However, for a variety of modern neural network architectures, no closed-form expression has been known. In this paper, we explain a systematic method, based on Fourier expressions, for deriving ridgelet transforms for a variety of modern networks: networks on finite fields $\mathbb{F}_p$, group convolutional networks on an abstract Hilbert space $\mathcal{H}$, fully-connected networks on noncompact symmetric spaces $G/K$, and pooling layers (i.e., the $d$-plane ridgelet transform).
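For concreteness, the classical depth-2 Euclidean case can be sketched as follows (the notation $\sigma$, $\rho$, and the admissibility constant are standard in the ridgelet literature and are assumptions here, not taken from this abstract). A continuous network with activation $\sigma$ integrates neurons $\sigma(a \cdot x - b)$ against a parameter distribution $\gamma$:

$$
\mathtt{NN}[\gamma](x) := \int_{\mathbb{R}^m \times \mathbb{R}} \gamma(a, b)\, \sigma(a \cdot x - b)\, \mathrm{d}a\, \mathrm{d}b,
$$

and the ridgelet transform with respect to a dual function $\rho$ is

$$
\mathtt{R}[f; \rho](a, b) := \int_{\mathbb{R}^m} f(x)\, \overline{\rho(a \cdot x - b)}\, \mathrm{d}x.
$$

When the pair $(\sigma, \rho)$ is admissible, one has the reconstruction formula $\mathtt{NN}[\mathtt{R}[f; \rho]] = (\!(\sigma, \rho)\!)\, f$ up to a scalar admissibility constant, which is the sense in which $\mathtt{R}$ acts as a pseudo-inverse of $\mathtt{NN}$.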