We point out that (continuous or discontinuous) piecewise linear functions on a convex polytope mesh can be represented, in a weak sense, by ReLU neural networks with two hidden layers. In addition, the numbers of neurons in the two hidden layers required for such a weak representation are given explicitly in terms of the numbers of polytopes and hyperplanes involved in the mesh. The results naturally hold for constant and linear finite element functions. Such a weak representation establishes a bridge between two-hidden-layer ReLU neural networks and finite element functions, and offers a perspective for analyzing the approximation capability of ReLU neural networks in the $L^p$ norm via finite element functions. Moreover, we discuss the strict representation of tensor finite element functions via the recently proposed tensor neural networks.
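For reference, a generic two-hidden-layer ReLU network of hidden widths $n_1$ and $n_2$ (the widths that the abstract relates to the numbers of hyperplanes and polytopes; the notation below is illustrative and not taken from the paper) can be written as
\[
  f(x) \;=\; c^{\top}\,\sigma\!\big(W_2\,\sigma(W_1 x + b_1) + b_2\big) + d,
  \qquad \sigma(t) = \max\{t,0\} \ \text{applied entrywise},
\]
where $W_1\in\mathbb{R}^{n_1\times d}$, $b_1\in\mathbb{R}^{n_1}$, $W_2\in\mathbb{R}^{n_2\times n_1}$, $b_2\in\mathbb{R}^{n_2}$, $c\in\mathbb{R}^{n_2}$, and $d\in\mathbb{R}$; every such $f$ is a continuous piecewise linear function of $x\in\mathbb{R}^{d}$, which is why a weak (rather than pointwise) notion of representation is needed for discontinuous piecewise linear targets.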