Deep Neural Networks lack a principled model of their operation. A novel framework for supervised learning based on Topological Quantum Field Theory, which appears particularly well suited for implementation on quantum processors, has recently been explored. We propose using this framework to understand the problem of generalisation in Deep Neural Networks. More specifically, in this approach Deep Neural Networks are viewed as the semi-classical limit of Topological Quantum Neural Networks. A framework of this kind explains the overfitting behaviour of Deep Neural Networks during training and their corresponding generalisation capabilities. We explore the paradigmatic case of the perceptron, which we implement as the semi-classical limit of Topological Quantum Neural Networks. We apply a novel algorithm we developed and show that it obtains results similar to those of standard neural networks, but without the need for training (optimisation).
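For context, the classical perceptron that the abstract takes as its paradigmatic case can be sketched as follows. This is a minimal sketch of the standard Rosenblatt perceptron (the semi-classical baseline the paper compares against), not the Topological Quantum Neural Network algorithm itself; all function names and the toy data are illustrative.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Classic Rosenblatt update rule; labels y are in {-1, +1}.
    Weights are updated only on misclassified examples."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified or on the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

def perceptron_predict(X, w, b):
    # Sign of the affine decision function gives the class label.
    return np.sign(X @ w + b)

# Illustrative linearly separable toy data: class given by the
# sign of the first coordinate.
X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
```

Unlike the TQNN approach described above, this baseline does require iterative training: the weight vector is optimised over repeated passes through the data until the separating hyperplane is found.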