Deep neural networks (DNNs) are vulnerable to small adversarial perturbations of their inputs, posing a significant challenge to their reliability and robustness. Empirical methods such as adversarial training can defend against particular attacks but remain vulnerable to more powerful ones. Alternatively, Lipschitz networks provide certified robustness to unseen perturbations but lack sufficient expressive power. To harness the advantages of both approaches, we design a novel two-step Optimal Transport induced Adversarial Defense (OTAD) model that fits the training data accurately while preserving local Lipschitz continuity. First, we train a DNN with a regularizer derived from optimal transport theory, yielding a discrete optimal transport map linking the data to their features. By leveraging the map's inherent regularity, we interpolate the map by solving a convex integration problem (CIP), which guarantees the local Lipschitz property. OTAD extends to diverse architectures, including ResNets and Transformers, making it suitable for complex data. For efficient computation, the CIP can be solved by training neural networks. OTAD opens a novel avenue for developing reliable and secure deep learning systems through the regularity of optimal transport maps. Empirical results demonstrate that OTAD outperforms other robust models on diverse datasets.
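To make the second step concrete, below is a minimal sketch of the kind of convex interpolation problem the abstract refers to: given discrete optimal transport pairs (x_i, y_i) linking data to features, the feature at a test point is chosen to stay close to a neighbourhood average while satisfying monotonicity-style constraints that a gradient of a convex potential would obey. The function name `cip_interpolate`, the objective, and the weighting scheme are illustrative assumptions for exposition, not the paper's exact CIP formulation.

```python
import numpy as np
import cvxpy as cp


def cip_interpolate(X, Y, x_query):
    """Hypothetical sketch of a CIP-style interpolation step.

    X: (n, d) stored inputs, Y: (n, m) their discrete OT features,
    x_query: (d,) test input.  We solve a small convex program that keeps
    the predicted feature close to a distance-weighted average of the
    stored features while enforcing <y - y_i, x_query - x_i> >= 0, a
    monotonicity condition consistent with gradients of convex potentials.
    Objective and constraints here are assumptions for illustration only.
    """
    n, m = Y.shape
    y = cp.Variable(m)

    # Distance-weighted average of stored features as a soft target.
    w = np.exp(-np.linalg.norm(X - x_query, axis=1))
    w /= w.sum()
    target = w @ Y

    # Monotonicity constraints against every stored pair (x_i, y_i).
    constraints = [(x_query - X[i]) @ (y - Y[i]) >= 0 for i in range(n)]

    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - target)), constraints)
    prob.solve()
    return y.value


if __name__ == "__main__":
    # Toy usage: 1-D monotone map y = 2x sampled at a few points.
    X = np.array([[0.0], [1.0], [2.0]])
    Y = np.array([[0.0], [2.0], [4.0]])
    print(cip_interpolate(X, Y, np.array([1.5])))  # roughly 3.0
```

In the paper this convex program is what can alternatively be solved efficiently by training neural networks, as mentioned in the abstract; the toy example only illustrates the constraint structure that yields the local Lipschitz guarantee.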