We use fixed point theory to analyze nonnegative neural networks, which we define as neural networks that map nonnegative vectors to nonnegative vectors. We first show that nonnegative neural networks with nonnegative weights and biases can be regarded as monotonic and (weakly) scalable mappings within the framework of nonlinear Perron-Frobenius theory. This fact enables us to provide conditions for the existence of fixed points of nonnegative neural networks whose inputs and outputs have the same dimension, and these conditions are weaker than those recently obtained using arguments from convex analysis. Furthermore, we prove that the fixed point set of nonnegative neural networks with nonnegative weights and biases is an interval, which under mild conditions degenerates to a point. These results are then used to establish the existence of fixed points of more general nonnegative neural networks. From a practical perspective, our results contribute to the understanding of the behavior of autoencoders, and they also provide valuable mathematical machinery for future developments in deep equilibrium models.
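For concreteness, the following is one standard formulation of the two properties from nonlinear Perron-Frobenius theory invoked above; the paper's own definitions may differ in minor details. A mapping $F\colon \mathbb{R}^n_{+} \to \mathbb{R}^n_{+}$ is said to be \emph{monotonic} if
\[
  (\forall x, y \in \mathbb{R}^n_{+}) \quad x \le y \;\Longrightarrow\; F(x) \le F(y),
\]
where $\le$ denotes the componentwise partial order, and \emph{weakly scalable} (also called subhomogeneous) if
\[
  (\forall x \in \mathbb{R}^n_{+})\,(\forall \alpha \ge 1) \quad F(\alpha x) \le \alpha F(x).
\]
A layer $x \mapsto \sigma(Wx + b)$ with entrywise nonnegative $W$ and $b$ and a monotonic, subhomogeneous activation $\sigma$ (e.g., ReLU) satisfies both properties, and both are preserved under composition, which is what makes the framework applicable to deep networks.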
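As a toy illustration of the fixed-point computation underlying these results, the sketch below (our own construction, not the paper's) builds a one-layer nonnegative network with nonnegative weights and bias and runs Picard iteration. The weights are rescaled to make the map a contraction purely so the demo converges quickly; the paper's existence conditions are weaker than contractivity.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5

    # Entrywise nonnegative weights and bias; rescale W so its spectral
    # norm is 0.9 < 1, making F a contraction on the nonnegative orthant.
    W = rng.random((n, n))
    W = 0.9 * W / np.linalg.norm(W, 2)
    b = rng.random(n)

    def F(x):
        # One-layer nonnegative network; the ReLU is redundant here since
        # W @ x + b >= 0 whenever x >= 0, but it is kept for clarity.
        return np.maximum(W @ x + b, 0.0)

    # Picard iteration x_{k+1} = F(x_k) starting from the origin.
    x = np.zeros(n)
    for _ in range(200):
        x = F(x)

    print("approximate fixed point:", x)
    print("residual ||F(x) - x||:", np.linalg.norm(F(x) - x))

In this contractive toy case the fixed point is unique (here it equals the solution of $(I - W)x = b$, since the ReLU is inactive), matching the degenerate single-point case of the interval-shaped fixed point set described above.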