Unfolding networks are interpretable networks that emerge from iterative algorithms, incorporate prior knowledge of the data structure, and are designed to solve inverse problems like compressed sensing, which deals with recovering data from noisy, incomplete observations. Compressed sensing finds applications in critical domains, from medical imaging to cryptography, where adversarial robustness is crucial to prevent catastrophic failures. However, a solid theoretical understanding of the performance of unfolding networks in the presence of adversarial attacks is still in its infancy. In this paper, we study the adversarial generalization of unfolding networks when perturbed with $l_2$-norm constrained attacks, generated by the fast gradient sign method. In particular, we choose a family of state-of-the-art overparameterized unfolding networks and deploy a new framework to estimate their adversarial Rademacher complexity. Given this estimate, we provide adversarial generalization error bounds for the networks under study, which are tight with respect to the attack level. To our knowledge, this is the first theoretical analysis of the adversarial generalization of unfolding networks. We further present a series of experiments on real-world data, with results corroborating our derived theory consistently across all datasets. Finally, we observe that the family's overparameterization can be exploited to promote adversarial robustness, shedding light on how to efficiently robustify neural networks.
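The $l_2$-norm constrained attack mentioned above can be illustrated with the $l_2$ variant of the fast gradient method, which scales the loss gradient to a prescribed attack level $\epsilon$ instead of taking its sign. The sketch below is a minimal, hypothetical illustration (the helper name `fgm_l2` and the quadratic loss are our own choices, not the paper's setup):

```python
import numpy as np

def fgm_l2(grad, eps):
    """l2-norm constrained fast-gradient perturbation:
    rescale the loss gradient so its l2 norm equals eps."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return eps * grad / norm

# Hypothetical example with a quadratic loss L(x) = 0.5 * ||A x - y||^2,
# whose gradient is A^T (A x - y).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
x = np.array([1.0, 1.0])
y = np.array([0.0, 0.0])
grad = A.T @ (A @ x - y)

delta = fgm_l2(grad, eps=0.1)   # perturbation with l2 norm exactly 0.1
x_adv = x + delta               # adversarially perturbed input
```

The key property is that the perturbation's $l_2$ norm equals the attack level $\epsilon$ exactly (when the gradient is nonzero), which is what makes generalization bounds "tight with respect to the attack level" a meaningful statement.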