Besides classical feed-forward neural networks, neural ordinary differential equations (neural ODEs) have also gained particular interest in recent years. Neural ODEs can be interpreted as an infinite-depth limit of feed-forward or residual neural networks. We study the input-output dynamics of finite- and infinite-depth neural networks with scalar output. In the finite-depth case, the input is a state associated with a finite number of nodes, which is mapped under multiple non-linear transformations to the state of a single output node. Analogously, a neural ODE maps an affine linear transformation of the input to an affine linear transformation of its time-$T$ map. We show that, depending on the specific structure of the network, the input-output map has different properties regarding the existence and regularity of critical points, which can be characterized via Morse functions. We prove that critical points cannot exist if the dimensions of the hidden layers are monotonically decreasing or if the dimension of the phase space is smaller than or equal to the input dimension. In the case that critical points exist, we classify their regularity depending on the specific architecture of the network. We show that, except for a set of Lebesgue measure zero in the weight space, every critical point is non-degenerate, provided that for finite-depth neural networks the underlying graph has no bottleneck, and that for neural ODEs the affine linear transformations used have full rank. For each type of architecture, the proven properties are comparable in the finite- and infinite-depth cases. The established theorems allow us to formulate results on universal embedding, i.e., on the exact representation of maps by neural networks and neural ODEs. Our dynamical-systems viewpoint on the geometric structure of the input-output map provides a fundamental understanding of why certain architectures perform better than others.
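For concreteness, the neural ODE input-output map described above can be sketched as follows; the symbols $h$, $f$, $\theta$, $A$, $a$, $w$, and $b$ are illustrative placeholders rather than notation fixed by the text:

```latex
% Hidden dynamics on a phase space \mathbb{R}^n with parameters \theta,
% initialized by an affine linear transformation of the input x:
\begin{aligned}
  \dot h(t) &= f\bigl(h(t), \theta\bigr), \qquad h(0) = A x + a, \\
% Scalar output: an affine linear transformation of the time-T map h(T):
  \Phi(x) &= w^{\top} h(T) + b .
\end{aligned}
```

In this picture, the critical points studied in the abstract are the inputs $x$ at which the gradient $\nabla_x \Phi(x)$ vanishes.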