Deep learning (DL) still poses a number of open research questions, including the remarkable generalization of overparameterized neural networks, the efficiency of optimization despite non-convexity, and the role of flat minima in generalization. In this paper, we adopt an information-theoretic perspective to explore the theoretical foundations of supervised classification with deep neural networks (DNNs). Our analysis introduces the notions of fitting error and model risk, which, together with the generalization error, constitute an upper bound on the expected risk. We show that the generalization error is bounded by a complexity measure determined by both the smoothness of the data distribution and the sample size. Consequently, task complexity serves as a reliable indicator of dataset quality and can guide the setting of regularization hyperparameters. Furthermore, the derived upper bound on the fitting error links the back-propagated gradient, the Neural Tangent Kernel (NTK), and the model's parameter count to the fitting error. Using the triangle inequality, we establish an upper bound on the expected risk, which offers insights into the effects of overparameterization, non-convex optimization, and flat minima in DNNs. Finally, empirical verification shows a significant positive correlation between the derived theoretical bounds and the practical expected risk, confirming the practical relevance of our theoretical findings.