We show that there are no non-trivial closed subspaces of $L_2(\mathbb{R}^n)$ that are invariant under invertible affine transformations. We apply this result to neural networks, showing that any nonzero $L_2(\mathbb{R})$ function is an adequate activation function in a one-hidden-layer neural network for approximating every function in $L_2(\mathbb{R})$ to any desired accuracy. This generalizes the universal approximation properties of neural networks in $L_2(\mathbb{R})$ related to Wiener's Tauberian Theorems. Our results extend to the spaces $L_p(\mathbb{R})$ with $p>1$.
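In symbols, the approximation statement reads as follows (a restatement in standard notation; the symbols $\sigma$, $N$, $a_k$, $b_k$, $c_k$ are labels introduced here and do not appear in the abstract itself): for every nonzero $\sigma \in L_2(\mathbb{R})$, the set of one-hidden-layer networks
\[
\Big\{\, x \mapsto \sum_{k=1}^{N} c_k\, \sigma(a_k x + b_k) \;:\; N \in \mathbb{N},\ c_k \in \mathbb{R},\ a_k \in \mathbb{R}\setminus\{0\},\ b_k \in \mathbb{R} \,\Big\}
\]
is dense in $L_2(\mathbb{R})$. Note that each term $\sigma(a_k x + b_k)$ lies in $L_2(\mathbb{R})$, since precomposition with an invertible affine map preserves $L_2$ up to the constant factor $|a_k|^{-1/2}$.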