We propose a novel method to increase shift invariance and prediction accuracy in convolutional neural networks. Specifically, we replace the first-layer combination "real-valued convolutions + max pooling" (RMax) with "complex-valued convolutions + modulus" (CMod), which is stable to translations, or shifts. To justify our approach, we claim that CMod and RMax produce comparable outputs when the convolution kernel is band-pass and oriented (Gabor-like filter). In this context, CMod can therefore be considered a stable alternative to RMax. To enforce this property, we constrain the convolution kernels to adopt such a Gabor-like structure. We call the corresponding architecture a mathematical twin, because it employs a well-defined mathematical operator to mimic the behavior of the original, freely trained model. Our approach achieves higher accuracy on ImageNet and CIFAR-10 classification tasks than prior methods based on low-pass filtering. Arguably, our emphasis on retaining high-frequency details contributes to a better balance between shift invariance and information preservation, resulting in improved performance. Furthermore, our approach has a lower computational cost and memory footprint than concurrent work, making it a promising solution for practical implementation.
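The contrast between the two first-layer operators can be illustrated with a minimal numerical sketch. The code below is not the paper's implementation; it is a toy comparison, assuming a hand-built complex Gabor kernel and a white-noise input, of how much the RMax output (real convolution + max pooling) and the CMod output (complex convolution + modulus, subsampled at the same stride) change under a one-pixel shift:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=11, freq=0.25, theta=0.0, sigma=3.0):
    """Complex Gabor filter: a Gaussian envelope modulated by an oriented
    carrier wave, hence band-pass and oriented (illustrative parameters)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    u = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.exp(2j * np.pi * freq * u)

def cmod(img, w, stride=2):
    """CMod: complex-valued convolution, pointwise modulus, then subsampling."""
    return np.abs(convolve2d(img, w, mode="valid"))[::stride, ::stride]

def rmax(img, w, pool=2):
    """RMax: real-valued convolution followed by max pooling (stride = pool)."""
    out = convolve2d(img, w.real, mode="valid")
    h, v = out.shape[0] - out.shape[0] % pool, out.shape[1] - out.shape[1] % pool
    return out[:h, :v].reshape(h // pool, pool, v // pool, pool).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, 1, axis=1)  # one-pixel horizontal shift
w = gabor_kernel()

def rel_change(op):
    """Relative output change induced by the one-pixel shift."""
    a, b = op(img, w), op(shifted, w)
    return np.linalg.norm(a - b) / np.linalg.norm(a)

print(f"CMod relative change: {rel_change(cmod):.3f}")
print(f"RMax relative change: {rel_change(rmax):.3f}")
```

For a Gabor-like kernel, the modulus discards the oscillating carrier phase and keeps only the smooth envelope, so the subsampled CMod output moves far less under the shift than the RMax output, whose max pooling samples the oscillations erratically. This is the intuition behind replacing RMax with CMod in the first layer.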