The approximation properties of infinitely wide shallow neural networks heavily depend on the choice of the activation function. To understand this influence, we study embeddings between Barron spaces with different activation functions. These embeddings are proven by providing push-forward maps on the measures $\mu$ used to represent functions $f$. An activation function of particular interest is the rectified power unit ($\operatorname{RePU}$) given by $\operatorname{RePU}_s(x)=\max(0,x)^s$. For many commonly used activation functions, the well-known Taylor remainder theorem can be used to construct a push-forward map, which allows us to prove the embedding of the associated Barron space into a Barron space with a $\operatorname{RePU}$ as activation function. Moreover, the Barron spaces associated with the $\operatorname{RePU}_s$ have a hierarchical structure similar to the Sobolev spaces $H^m$.
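As a minimal numerical sketch of the $\operatorname{RePU}_s$ activation defined above, together with a finite-width version of the shallow networks whose infinite-width limits the Barron spaces describe (the names `repu` and `shallow_net` are illustrative, not from the paper):

```python
import numpy as np

def repu(x, s):
    """Rectified power unit: RePU_s(x) = max(0, x)^s.
    For s = 1 this reduces to the familiar ReLU."""
    return np.maximum(0.0, x) ** s

def shallow_net(x, weights, biases, coeffs, s):
    """Finite-width shallow network f(x) = sum_k a_k * RePU_s(w_k * x + b_k);
    Barron functions arise as infinite-width limits, with the sum replaced
    by an integral against a measure over the parameters (w, b, a)."""
    return sum(a * repu(w * x + b, s)
               for w, b, a in zip(weights, biases, coeffs))
```

Note that `repu(x, 1)` is the standard ReLU, and increasing `s` yields smoother activations, which is the mechanism behind the $H^m$-like hierarchy of the associated Barron spaces.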