Deep neural networks (DNNs) are vulnerable to small adversarial perturbations of their inputs, posing a significant challenge to their reliability and robustness. Empirical defenses such as adversarial training can withstand particular attacks but remain vulnerable to stronger ones. Alternatively, Lipschitz networks provide certified robustness against unseen perturbations but lack sufficient expressive power. To harness the advantages of both approaches, we design a novel two-step Optimal Transport induced Adversarial Defense (OTAD) model that fits the training data accurately while preserving local Lipschitz continuity. First, we train a DNN with a regularizer derived from optimal transport theory, yielding a discrete optimal transport map linking the data to their features. Second, leveraging the map's inherent regularity, we interpolate the map by solving a convex integration problem (CIP), which guarantees the local Lipschitz property. OTAD extends to diverse architectures, including ResNet and Transformer, making it suitable for complex data. For efficient computation, the CIP can be solved by training neural networks. OTAD opens a novel avenue for developing reliable and secure deep learning systems through the regularity of optimal transport maps. Empirical results demonstrate that OTAD outperforms other robust models on diverse datasets.