Training-time data augmentation is a key technique for enhancing generalization and hardening deep neural networks against test-time corruptions. Inspired by the success of generative diffusion models, we propose a novel approach that couples data augmentation, in the form of image noising and blurring, with label smoothing to align predicted label confidences with the degree of image degradation. The method is simple to implement, introduces negligible overhead, and can be combined with existing augmentations. We demonstrate improved robustness and uncertainty quantification on the corrupted-image benchmarks of the CIFAR and TinyImageNet datasets.
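The core idea can be illustrated with a minimal sketch. This is not the authors' implementation: the Gaussian-noise corruption, the linear smoothing schedule, and all names (`noisy_smoothed_sample`, `max_sigma`) are illustrative assumptions; the paper's method may use a different degradation family (e.g. blurring) and schedule.

```python
import numpy as np

def noisy_smoothed_sample(image, label_onehot, num_classes,
                          max_sigma=1.0, rng=None):
    """Corrupt an image and smooth its label in proportion to the corruption.

    Illustrative sketch: a degradation level t is sampled uniformly, Gaussian
    noise of strength t * max_sigma is added to the image, and the one-hot
    label is interpolated toward the uniform distribution with weight t, so
    heavily degraded images carry less confident training targets.
    """
    rng = np.random.default_rng() if rng is None else rng
    t = rng.uniform(0.0, 1.0)                         # degradation level in [0, 1]
    noisy = image + t * max_sigma * rng.standard_normal(image.shape)
    uniform = np.full(num_classes, 1.0 / num_classes)
    target = (1.0 - t) * label_onehot + t * uniform   # degradation-aligned smoothing
    return noisy, target
```

Under this linear schedule, a clean image (t = 0) keeps its one-hot label, while a maximally degraded one (t = 1) is paired with the uniform distribution, aligning target confidence with input quality.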