Double descent is a counter-intuitive phenomenon in machine learning that has been observed across various models and tasks. While theoretical explanations have been proposed for specific contexts, a widely accepted account of its underlying mechanism in deep learning has yet to be established. In this study, we revisit the double descent phenomenon and discuss the conditions under which it occurs. We introduce the concept of class-activation matrices and a methodology for estimating the effective complexity of functions, with which we show that over-parameterized models exhibit more distinct and simpler class patterns in their hidden activations than under-parameterized ones. We further examine the interpolation of noisily labelled data among clean representations and demonstrate overfitting with respect to expressive capacity. By systematically analysing these hypotheses and presenting empirical evidence that either supports or challenges them, we aim to provide fresh insights into double descent and benign over-parameterization and to facilitate future exploration in the field. The source code is available at https://github.com/Yufei-Gu-451/sparse-generalization.git.