Besides classical feed-forward neural networks such as multilayer perceptrons, neural ordinary differential equations (neural ODEs) have also gained particular interest in recent years. Neural ODEs can be interpreted as an infinite depth limit of feed-forward or residual neural networks. We study the input-output dynamics of finite and infinite depth neural networks with scalar output. In the finite depth case, the input is a state associated with a finite number of nodes, which is mapped under multiple non-linear transformations to the state of one output node. Analogously, a neural ODE maps an affine linear transformation of the input to an affine linear transformation of its time-$T$ map. We show that, depending on the specific structure of the network, the input-output map has different properties regarding the existence and regularity of critical points. These properties can be characterized via Morse functions, i.e., scalar functions whose critical points are all non-degenerate. We prove that critical points cannot exist if the hidden layer dimensions are monotonically decreasing or if the dimension of the phase space is smaller than or equal to the input dimension. If critical points exist, we classify their regularity depending on the specific architecture of the network. We show that, outside a set of Lebesgue measure zero in the weight space, every critical point is non-degenerate, provided that for finite depth neural networks the underlying graph has no bottleneck, and for neural ODEs the affine linear transformations used have full rank. For each type of architecture, the proven properties are comparable in the finite and infinite depth cases. The established theorems allow us to formulate results on universal embedding and universal approximation, i.e., on the exact and approximate representation of maps by neural networks and neural ODEs.
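As a minimal numerical illustration of the non-degeneracy condition underlying Morse functions (a toy saddle function, not a construction from the paper), one can check that a critical point is non-degenerate by verifying that the Hessian there is invertible:

```python
import numpy as np

# Toy scalar function with a critical point at the origin (a saddle).
# This is purely illustrative; it is not a network input-output map.
def f(x):
    return x[0]**2 - x[1]**2

def hessian_fd(f, x, h=1e-5):
    """Central finite-difference Hessian of a scalar function."""
    n = len(x)
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(x + h*I[i] + h*I[j]) - f(x + h*I[i] - h*I[j])
                       - f(x - h*I[i] + h*I[j]) + f(x - h*I[i] - h*I[j])) / (4*h*h)
    return H

x_star = np.array([0.0, 0.0])   # critical point of the saddle
H = hessian_fd(f, x_star)
# Nonzero determinant => invertible Hessian => non-degenerate critical point.
print(np.linalg.det(H))
```

For this saddle the Hessian is diag(2, -2), so the determinant is -4 and the critical point is non-degenerate; a Morse function is one for which this test succeeds at every critical point.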