Neural networks trained with stochastic gradient descent (SGD) exhibit an inductive bias towards simpler decision boundaries, typically converging to a narrow family of functions and often failing to capture more complex features. This phenomenon raises concerns about the capacity of deep models to adequately learn and represent real-world datasets. Traditional approaches such as explicit regularization, data augmentation, and architectural modifications have largely proven ineffective at encouraging models to learn diverse features. In this work, we investigate the impact of pre-training models with noisy labels on the dynamics of SGD across various architectures and datasets. We show that pre-training promotes the learning of complex functions and diverse features in the presence of noise. Our experiments demonstrate that pre-training with noisy labels encourages gradient descent to find alternate minima that do not depend solely on simple features but instead learn a more complex and broader set of features, without hurting performance.
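As a minimal, hypothetical sketch of the two-phase recipe described above, the snippet below pre-trains a small network with SGD on symmetrically corrupted labels and then continues training on the clean labels. The architecture, synthetic data, noise rate, and epoch counts are illustrative assumptions, not the actual experimental setup.

```python
import torch
import torch.nn as nn

def corrupt_labels(labels, num_classes, noise_rate, generator=None):
    """Replace a `noise_rate` fraction of labels with uniformly random classes
    (symmetric label noise); other noise models could be substituted here."""
    noisy = labels.clone()
    flip = torch.rand(len(labels), generator=generator) < noise_rate
    noisy[flip] = torch.randint(0, num_classes, (int(flip.sum()),), generator=generator)
    return noisy

# Synthetic stand-in data: 512 samples, 32 features, 10 classes (assumed).
torch.manual_seed(0)
x = torch.randn(512, 32)
y = torch.randint(0, 10, (512,))
y_noisy = corrupt_labels(y, num_classes=10, noise_rate=0.4)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

# Phase 1: pre-train with SGD on the noisy labels.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(20):
    opt.zero_grad()
    loss_fn(model(x), y_noisy).backward()
    opt.step()

# Phase 2: continue SGD training on clean labels from the noisy-pretrained
# initialization, rather than from a random initialization.
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(20):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```

The only structural difference from standard training is the initialization: phase 2 starts from weights shaped by the noisy pre-training phase rather than from scratch.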