In this paper, we propose a set of transform-based neural network layers as an alternative to the $3\times3$ Conv2D layers in Convolutional Neural Networks (CNNs). The proposed layers can be implemented using orthogonal transforms such as the Discrete Cosine Transform (DCT) and the Hadamard transform (HT), as well as the biorthogonal Block Wavelet Transform (BWT). By taking advantage of the convolution theorem, convolutional filtering operations are performed in the transform domain using element-wise multiplications. Trainable soft-thresholding layers, which remove noise in the transform domain, provide the nonlinearity of the proposed layers. Whereas the Conv2D layer is spatial-agnostic and channel-specific, the proposed layers are location-specific and channel-specific. Moreover, the proposed layers significantly reduce the number of parameters and multiplications while improving the accuracy of regular ResNets on the ImageNet-1K classification task. Finally, they can also be inserted, together with a batch normalization layer, before the global average pooling layer in conventional ResNets as an additional layer to improve classification accuracy.
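The core mechanism described above, filtering by element-wise multiplication in the transform domain followed by trainable soft-thresholding, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the $8\times8$ block size, the identity weights, and the zero thresholds are placeholder choices, and a real layer would learn `weights` and `thresholds` by backpropagation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold(x, t):
    # Shrink coefficient magnitudes toward zero; coefficients smaller
    # than the threshold (treated as noise) are set to exactly zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dct_domain_layer(block, weights, thresholds):
    """Filter a 2-D block in the DCT domain (illustrative sketch).

    By the convolution theorem, convolution in the spatial domain
    corresponds to element-wise multiplication in the transform domain;
    `weights` and `thresholds` stand in for trainable parameters.
    """
    coeffs = dctn(block, type=2, norm="ortho")   # forward 2-D DCT
    coeffs = coeffs * weights                    # element-wise "filtering"
    coeffs = soft_threshold(coeffs, thresholds)  # trainable nonlinearity
    return idctn(coeffs, type=2, norm="ortho")   # back to the spatial domain

rng = np.random.default_rng(0)
block = rng.standard_normal((8, 8))
weights = np.ones((8, 8))      # identity filter for this demo
thresholds = np.zeros((8, 8))  # no shrinkage, so the block round-trips
out = dct_domain_layer(block, weights, thresholds)
print(np.allclose(out, block))  # True: the orthonormal DCT is invertible
```

With identity weights and zero thresholds the layer reduces to a perfect round-trip through the orthonormal DCT; nontrivial weights implement a filter, and positive thresholds zero out small coefficients, which is where the parameter savings relative to a $3\times3$ Conv2D come from.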