Given two Deep Neural Network (DNN) classifiers with the same input and output domains, our goal is to quantify the robustness of the two networks in relation to each other. Towards this, we introduce the notion of Relative Safety Margins (RSMs). Intuitively, given two classes and a common input, the RSM of one classifier with respect to another reflects the relative margins with which decisions are made. The proposed notion is relevant in several application domains, including comparing a trained network with its corresponding compact network (e.g., a pruned, quantized, or distilled network). Not only can RSMs establish whether decisions are preserved, but they can also quantify their quality. We also propose a framework to establish safe bounds on RSM gains or losses given an input and a family of perturbations. We evaluate our approach on the MNIST and CIFAR10 benchmarks as well as two real-world medical datasets, to show the relevance of our results.