Polynomial neural networks have been implemented in a range of applications and present an advantageous framework for theoretical machine learning. A polynomial neural network of fixed architecture and activation degree defines an algebraic map from the network's weights to a set of polynomials. The image of this map is the space of functions representable by the network, and its Zariski closure is an affine variety known as a neurovariety. The dimension of a polynomial neural network's neurovariety provides a measure of its expressivity. In this work, we introduce the notion of the activation threshold of a network architecture, which characterizes when the dimension of a neurovariety achieves its theoretical maximum. In addition, we prove expressiveness results for polynomial neural networks with equi-width~architectures.
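To make the dimension count concrete, the following is a minimal sketch (not from the paper) of the algebraic map described above, for a hypothetical architecture with widths $(2,2,1)$ and activation degree $r=2$: each hidden neuron computes the square of a linear form, so the network output is $v_1(\ell_1(x))^2 + v_2(\ell_2(x))^2$, a binary quadratic. The map sends the six weights to the three coefficients of that quadratic, and the dimension of the (closure of the) image, i.e. of the neurovariety, equals the generic rank of the Jacobian of this coefficient map, which we estimate numerically at a random point.

```python
import numpy as np

def coeff_map(theta):
    """Weights -> coefficients of the output quadratic.

    theta packs W1 (2x2 hidden-layer weights) and v (1x2 output weights).
    Network: f(x) = v[0]*(W1[0] @ x)**2 + v[1]*(W1[1] @ x)**2.
    Returns the coefficients of x1^2, x1*x2, x2^2.
    """
    W1 = theta[:4].reshape(2, 2)
    v = theta[4:]
    a = v @ (W1[:, 0] ** 2)            # coefficient of x1^2
    b = 2 * v @ (W1[:, 0] * W1[:, 1])  # coefficient of x1*x2
    c = v @ (W1[:, 1] ** 2)            # coefficient of x2^2
    return np.array([a, b, c])

# Generic rank of the Jacobian = dimension of the neurovariety.
rng = np.random.default_rng(0)
theta = rng.standard_normal(6)
eps = 1e-6
J = np.column_stack([
    (coeff_map(theta + eps * e) - coeff_map(theta - eps * e)) / (2 * eps)
    for e in np.eye(6)
])
print(np.linalg.matrix_rank(J, tol=1e-6))
```

For this architecture the rank is $3$: signed combinations of two squares of generic linear forms fill out all binary quadratics, so the neurovariety already attains the maximal possible dimension, the dimension of the ambient coefficient space. The activation threshold introduced in this work identifies, for a given architecture, when this maximal dimension is achieved.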