We make the case for neural network objects and extend the neural network calculus developed in detail in Chapter 2 of \cite{bigbook}. Our aim is to show that it does indeed make sense to speak of neural network polynomials, neural network exponentials, sines, and cosines, in the sense that they approximate their real-valued counterparts subject to restrictions on certain of their parameters, $q$ and $\varepsilon$. Along the way we show that the parameter count and depth grow only polynomially in the desired accuracy (measured as a 1-norm difference over $\mathbb{R}$), demonstrating that this approach to approximation, in which a neural network in some sense inherits the structural properties of the function it approximates, is not entirely intractable.
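Schematically, the results we are after take the following form, where the realization map $\mathcal{R}$, the parameter count $\mathsf{P}$, and the depth $\mathsf{D}$ stand in for operators made precise in the sequel, and where the networks $\mathsf{f}_{q,\varepsilon}$ and the constants $\mathfrak{c}, \mathfrak{p} > 0$ are illustrative placeholders rather than the objects constructed later: for each target function $f \in \{ x \mapsto x^n,\, \exp,\, \sin,\, \cos \}$ and each admissible choice of $q$ and $\varepsilon > 0$,
\begin{equation*}
	\left\| f - \mathcal{R}\!\left( \mathsf{f}_{q,\varepsilon} \right) \right\|_1 \leqslant \varepsilon,
	\qquad
	\mathsf{P}\!\left( \mathsf{f}_{q,\varepsilon} \right) \leqslant \mathfrak{c}\, \varepsilon^{-\mathfrak{p}},
	\qquad
	\mathsf{D}\!\left( \mathsf{f}_{q,\varepsilon} \right) \leqslant \mathfrak{c}\, \varepsilon^{-\mathfrak{p}}.
\end{equation*}
That is, $\varepsilon$-accuracy in the 1-norm is purchased at a cost in network size and depth bounded by a fixed polynomial in $\varepsilon^{-1}$.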