By reinterpreting a robust discriminative classifier as an Energy-Based Model (EBM), we offer a new take on the dynamics of adversarial training (AT). Our analysis of the energy landscape during AT reveals that, from the model's point of view, untargeted attacks generate adversarial images that are far more in-distribution (lower energy) than the original data; targeted attacks exhibit the opposite behavior. Grounded in this thorough analysis, we present new theoretical and practical results that show how interpreting the energy dynamics of AT unlocks a better understanding of it: (1) AT dynamics are governed by three phases, and robust overfitting occurs in the third phase with a drastic divergence between natural and adversarial energies; (2) by rewriting the loss of TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization (TRADES) in terms of energies, we show that TRADES implicitly alleviates overfitting by aligning the natural energy with the adversarial one; (3) we empirically show that all recent state-of-the-art robust classifiers smooth the energy landscape, and we reconcile a variety of studies on understanding AT and on weighting the loss function under the umbrella of EBMs. Motivated by this rigorous evidence, we propose Weighted Energy Adversarial Training (WEAT), a novel sample-weighting scheme that matches state-of-the-art robust accuracy on benchmarks such as CIFAR-10 and SVHN and surpasses it on CIFAR-100 and Tiny-ImageNet. We further show that robust classifiers vary in the intensity and quality of their generative capabilities, and offer a simple method to boost this capability, reaching a remarkable Inception Score (IS) and FID using a robust classifier without training for generative modeling. The code to reproduce our results is available at http://github.com/OmnAI-Lab/Robust-Classifiers-under-the-lens-of-EBM/ .
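Under the standard EBM reading of a discriminative classifier, the energy assigned to an input is the negative LogSumExp of its logits, so lower energy means the model treats the input as more in-distribution. A minimal sketch of this quantity (the function name `energy` and the example logits are illustrative, not from the paper's code):

```python
import math

def energy(logits):
    """EBM reading of a classifier: E(x) = -log sum_k exp(f_k(x)),
    i.e. the negative LogSumExp of the logit vector f(x).
    Lower energy = the model sees the input as more in-distribution."""
    m = max(logits)  # subtract the max for numerical stability
    return -(m + math.log(sum(math.exp(l - m) for l in logits)))

# A confidently classified input yields lower energy than a diffuse one,
# which is the kind of gap tracked when comparing natural vs. adversarial energies.
confident_logits = [9.0, 0.5, -1.0]
diffuse_logits = [0.4, 0.5, 0.3]
assert energy(confident_logits) < energy(diffuse_logits)
```

In practice the same quantity is computed on batches of logits with `torch.logsumexp`; the scalar version above only illustrates the definition.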