The universal approximation theorem states that a neural network with a single hidden layer can approximate continuous functions on compact sets to any desired degree of accuracy. This theorem supports the use of neural networks in a broad range of applications, including regression and classification tasks. It holds for real-valued neural networks and for some hypercomplex-valued neural networks, such as complex-, quaternion-, tessarine-, and Clifford-valued models. Hypercomplex-valued neural networks are, in turn, a particular class of vector-valued neural networks defined on an algebra endowed with additional algebraic or geometric properties. This paper extends the universal approximation theorem to a broad class of vector-valued neural networks, which includes hypercomplex-valued models as particular instances. Precisely, we introduce the concept of a non-degenerate algebra and state the universal approximation theorem for neural networks defined on such algebras.