We investigate the impact of entropy change in deep learning systems via noise injection at different levels, including the embedding space and the input image. The models that employ our methodology are collectively termed Noisy Neural Networks (NoisyNN), with examples such as NoisyViT and NoisyCNN. Noise is conventionally viewed as a harmful perturbation across deep learning architectures, such as convolutional neural networks (CNNs) and vision transformers (ViTs), and across learning tasks, such as image classification and transfer learning. This work, however, shows that noise can be an effective way to change the entropy of a learning system. We demonstrate that specific noise can boost the performance of various deep models under certain conditions. Using information entropy to define task complexity, we theoretically prove that positive noise yields an enhancement by reducing this complexity, and we experimentally show significant performance gains on large image datasets such as ImageNet. Accordingly, we categorize noise into two types, positive noise (PN) and harmful noise (HN), based on whether it helps reduce task complexity. Extensive experiments on CNNs and ViTs show performance improvements from proactively injecting positive noise, achieving an unprecedented top-1 accuracy of 95$\%$ on ImageNet. Both theoretical analysis and empirical evidence confirm that the presence of positive noise benefits the learning process, while traditionally perceived harmful noise indeed impairs deep learning models. The different roles of noise offer new explanations for the behavior of deep models on specific tasks and provide a new paradigm for improving model performance. Moreover, they suggest that we can influence the performance of a learning system via changes in information entropy.
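The positive/harmful distinction above can be illustrated with a minimal sketch (this is not the paper's actual method; the `classify_noise` helper and the random linear `head` are hypothetical stand-ins): an injected noise is labeled positive (PN) when it lowers the Shannon entropy of the model's predictions, and harmful (HN) otherwise.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy in nats, averaged over the batch
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def classify_noise(embeddings, noise, head):
    """Label injected noise as positive (PN) or harmful (HN) by whether
    it lowers the prediction entropy of a (hypothetical) linear head."""
    h_clean = entropy(softmax(embeddings @ head))
    h_noisy = entropy(softmax((embeddings + noise) @ head))
    return ("PN" if h_noisy < h_clean else "HN"), h_clean, h_noisy

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))    # batch of 8 embedding vectors
head = rng.normal(size=(16, 10))  # 10-class linear classifier head

# Noise aligned with the signal sharpens the logits -> entropy drops (PN)
label_pn, h0, h1 = classify_noise(emb, 0.5 * emb, head)
# Noise that shrinks the logits toward zero flattens them -> entropy rises (HN)
label_hn, _, _ = classify_noise(emb, -0.5 * emb, head)
```

In this toy setting the entropy change is computed on the output distribution only; the paper's formulation ties noise quality to the task complexity of the whole learning system, so this sketch should be read purely as intuition for the PN/HN labeling rule.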