We consider nonlinear networks as perturbations of linear ones. Based on this approach, we present novel generalization bounds that become non-vacuous for networks that are close to linear. The main advantage over prior work proposing non-vacuous generalization bounds is that our bounds are a priori: evaluating them does not require performing the actual training. To the best of our knowledge, these are the first non-vacuous generalization bounds for neural networks possessing this property.