In this paper, we study upper bounds on the expected risk of deep neural networks for supervised classification from an information-theoretic perspective. First, we introduce model risk and fitting error, obtained by further decomposing the empirical risk. Model risk is the expected loss under the model's predicted probabilities and depends only on the model itself; fitting error measures the gap between the empirical risk and the model risk. We then derive an upper bound on the fitting error that links it to the back-propagated gradient and the model's parameter count. Furthermore, we show that the generalization error is bounded by the classification uncertainty, which is characterized jointly by the smoothness of the data distribution and the sample size. Combining the bounds on fitting error and generalization via the triangle inequality, we establish an upper bound on the expected risk. This bound yields theoretical explanations for overparameterization, non-convex optimization, and flat minima in deep learning. Finally, empirical evaluation confirms a significant positive correlation between the derived theoretical bounds and the observed expected risk, supporting the practical relevance of the theoretical findings.
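The risk decomposition described above can be sketched in symbols. The notation below is illustrative rather than taken from the paper: writing $R(f)$ for the expected risk, $\hat{R}(f)$ for the empirical risk, and $R_{\mathrm{m}}(f)$ for the model risk, the triangle inequality gives

```latex
% Illustrative notation (not the paper's own): R = expected risk,
% \hat{R} = empirical risk, R_m = model risk. The two absolute-value
% terms are the gaps bounded separately in the paper.
\begin{align*}
  R(f)
    \;\le\; R_{\mathrm{m}}(f)
      \;+\; \underbrace{\bigl|\hat{R}(f) - R_{\mathrm{m}}(f)\bigr|}_{\text{fitting error}}
      \;+\; \underbrace{\bigl|R(f) - \hat{R}(f)\bigr|}_{\text{generalization error}}
\end{align*}
```

Bounding the two gap terms separately, the first via the gradient and parameter-count bound and the second via the classification-uncertainty bound, then yields the stated upper bound on the expected risk.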