Double descent is a counter-intuitive phenomenon in machine learning, and researchers have observed it across a wide range of models and tasks. While theoretical explanations have been proposed in specific contexts, no accepted theory yet accounts for its occurrence in deep learning. In this study, we revisit the double descent phenomenon and demonstrate that its occurrence is strongly influenced by the presence of noisy data. Through a comprehensive analysis of the feature space of learned representations, we show that double descent arises in imperfect models trained on noisy data. We argue that double descent is a consequence of the model first learning the noisy data until interpolation and then, via the implicit regularization induced by over-parameterization, acquiring the capability to separate the information from the noise.