In the last decade, we have witnessed the introduction of several novel deep neural network (DNN) architectures exhibiting ever-increasing performance across diverse tasks. Explaining the upward trend of their performance, however, remains difficult, as different DNN architectures of comparable depth and width -- common factors associated with their expressive power -- may exhibit drastically different performance even when trained on the same dataset. In this paper, we introduce the concept of the non-linearity signature of a DNN, the first theoretically sound solution for approximately measuring the non-linearity of deep neural networks. Built upon a score derived from closed-form optimal transport mappings, this signature provides a better understanding of the inner workings of a wide range of DNN architectures and learning paradigms, with a particular emphasis on computer vision tasks. We provide extensive experimental results that highlight the practical usefulness of the proposed non-linearity signature and its potential for far-reaching implications. The code for our work is available at https://github.com/qbouniot/AffScoreDeep